This notebook walks through the following parts:
- X-ray image normalization
- How to convert mask images to bounding boxes
- How to convert the annotations to COCO format
- How to save the data as .jpg & .json
Rename files & save the .csv with UTF-8 encoding
Before uploading the data, first rename the data folders to English:
- normal -> normal
- åæčč„大 -> cardiac_hypertrophy
- äø»åč甬é£å -> aortic_atherosclerosis_calcification
- äø»åčå½ę² -> aortic_curvature
- čŗå°ččå¢å -> intercostal_pleural_thickening
- čŗé浸潤å¢å -> lung_field_infiltration
- čøę¤éåę§éēÆē
č® -> degenerative_joint_disease_of_the_thoracic_spine
- čę¤å“å½ -> scoliosis
In addition, open the relevant csv files on your own computer first, re-save them with UTF-8 encoding, and then upload them; otherwise garbled characters will appear when they are read!
Check data & images
# import libraries
# basic
import warnings
warnings.filterwarnings('ignore')
import os
import random
import pydicom
import itertools
import numpy as np
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer
from skmultilearn.model_selection import iterative_train_test_split
# visualization
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.gridspec as gridspec
# object detection
import json
from skimage.measure import label as sk_label
from skimage.measure import regionprops as sk_regions
from torchvision.transforms.v2 import functional as F
class config:
root = "/kaggle/input/hwk05-data/hwk05_data" ## your own base root path
seed = 42
def seed_everything(seed):
# Set Python random seed
random.seed(seed)
# Set NumPy random seed
np.random.seed(seed)
seed_everything(config.seed)
# training dataframe
train_df = pd.read_csv("/kaggle/input/hwk05-data/hwk05_data/train/train.csv", encoding='big5')
train_df
| | ID | category | Width | Height | Filename | ImagePath | MarkPath |
|---|---|---|---|---|---|---|---|
| 0 | TDR04_20180315_075734 | normal | 2328 | 2344 | 220_97.dcm | /normal/image/220_97.dcm | /normal/mark/220_97.dcm.jpg |
| 1 | TDR04_20180315_080518 | normal | 2472 | 2560 | 220_94.dcm | /normal/image/220_94.dcm | /normal/mark/220_94.dcm.jpg |
| 2 | TDR04_20180315_081322 | normal | 2312 | 2496 | 220_93.dcm | /normal/image/220_93.dcm | /normal/mark/220_93.dcm.jpg |
| 3 | TDR04_20180315_081746 | normal | 2448 | 2584 | 220_92.dcm | /normal/image/220_92.dcm | /normal/mark/220_92.dcm.jpg |
| 4 | TDR04_20180315_082113 | normal | 2144 | 2384 | 220_91.dcm | /normal/image/220_91.dcm | /normal/mark/220_91.dcm.jpg |
| ... | ... | ... | ... | ... | ... | ... | ... |
| 446 | TDR02_20161209_161439 | čę¤å“å½ | 2376 | 2592 | 4440_5.dcm | /čę¤å“å½/image/4440_5.dcm | /čę¤å“å½/mark/4440_5.dcm.jpg |
| 447 | TDR04_20180224_084933 | čę¤å“å½ | 2248 | 2600 | 4440_0.dcm | /čę¤å“å½/image/4440_0.dcm | /čę¤å“å½/mark/4440_0.dcm.jpg |
| 448 | TDR04_20180226_082354 | čę¤å“å½ | 2488 | 2456 | 4440.dcm | /čę¤å“å½/image/4440.dcm | /čę¤å“å½/mark/4440.dcm.jpg |
| 449 | TDR01_20171106_095308 | čę¤å“å½ | 2320 | 2376 | A0_29.dcm | /čę¤å“å½/image/A0_29.dcm | /čę¤å“å½/mark/A0_29.dcm.jpg |
| 450 | TDR01_20171108_100516 | čę¤å“å½ | 2280 | 2288 | A0_28.dcm | /čę¤å“å½/image/A0_28.dcm | /čę¤å“å½/mark/A0_28.dcm.jpg |
451 rows Ć 7 columns
Change the category names to English
# all classes
category = {
"åæčč„大": "cardiac_hypertrophy",
"äø»åč甬é£å": "aortic_atherosclerosis_calcification",
"äø»åčå½ę²": "aortic_curvature",
"čŗå°ččå¢å": "intercostal_pleural_thickening",
"čŗé浸潤å¢å ": "lung_field_infiltration",
"čøę¤éåę§éēÆē
č®": "degenerative_joint_disease_of_the_thoracic_spine",
"čę¤å“å½": "scoliosis",
"normal": "normal"
}
# change category names to English
def change_to_eng_names(df):
df["category"] = df["category"].apply(lambda x: category[x])
df["ImagePath"] = df.apply(lambda df: "/".join([df["category"], "image", df["Filename"]]), axis=1)
df["MarkPath"] = df.apply(lambda df: "/".join([df["category"], "mark", df["Filename"] + ".jpg"]), axis=1)
change_to_eng_names(train_df)
train_df
| | ID | category | Width | Height | Filename | ImagePath | MarkPath |
|---|---|---|---|---|---|---|---|
| 0 | TDR04_20180315_075734 | normal | 2328 | 2344 | 220_97.dcm | normal/image/220_97.dcm | normal/mark/220_97.dcm.jpg |
| 1 | TDR04_20180315_080518 | normal | 2472 | 2560 | 220_94.dcm | normal/image/220_94.dcm | normal/mark/220_94.dcm.jpg |
| 2 | TDR04_20180315_081322 | normal | 2312 | 2496 | 220_93.dcm | normal/image/220_93.dcm | normal/mark/220_93.dcm.jpg |
| 3 | TDR04_20180315_081746 | normal | 2448 | 2584 | 220_92.dcm | normal/image/220_92.dcm | normal/mark/220_92.dcm.jpg |
| 4 | TDR04_20180315_082113 | normal | 2144 | 2384 | 220_91.dcm | normal/image/220_91.dcm | normal/mark/220_91.dcm.jpg |
| ... | ... | ... | ... | ... | ... | ... | ... |
| 446 | TDR02_20161209_161439 | scoliosis | 2376 | 2592 | 4440_5.dcm | scoliosis/image/4440_5.dcm | scoliosis/mark/4440_5.dcm.jpg |
| 447 | TDR04_20180224_084933 | scoliosis | 2248 | 2600 | 4440_0.dcm | scoliosis/image/4440_0.dcm | scoliosis/mark/4440_0.dcm.jpg |
| 448 | TDR04_20180226_082354 | scoliosis | 2488 | 2456 | 4440.dcm | scoliosis/image/4440.dcm | scoliosis/mark/4440.dcm.jpg |
| 449 | TDR01_20171106_095308 | scoliosis | 2320 | 2376 | A0_29.dcm | scoliosis/image/A0_29.dcm | scoliosis/mark/A0_29.dcm.jpg |
| 450 | TDR01_20171108_100516 | scoliosis | 2280 | 2288 | A0_28.dcm | scoliosis/image/A0_28.dcm | scoliosis/mark/A0_28.dcm.jpg |
451 rows Ć 7 columns
Plot the first image & mask of each of the 8 categories
Note that if the category is normal, we generate an all-zero mask with the same shape as the original image.
temp = train_df[train_df["category"].duplicated() == False]
temp
| | ID | category | Width | Height | Filename | ImagePath | MarkPath |
|---|---|---|---|---|---|---|---|
| 0 | TDR04_20180315_075734 | normal | 2328 | 2344 | 220_97.dcm | normal/image/220_97.dcm | normal/mark/220_97.dcm.jpg |
| 80 | TDR04_20180227_083423 | aortic_curvature | 2504 | 2536 | 220_14.dcm | aortic_curvature/image/220_14.dcm | aortic_curvature/mark/220_14.dcm.jpg |
| 132 | TDR01_20190313_090724 | aortic_atherosclerosis_calcification | 2392 | 2600 | 10_1d.dcm | aortic_atherosclerosis_calcification/image/10_... | aortic_atherosclerosis_calcification/mark/10_1... |
| 203 | TDR04_20180226_090403 | cardiac_hypertrophy | 2008 | 2280 | 4440.dcm | cardiac_hypertrophy/image/4440.dcm | cardiac_hypertrophy/mark/4440.dcm.jpg |
| 236 | TDR05_20151105_094209 | intercostal_pleural_thickening | 2296 | 2512 | 4440_4.dcm | intercostal_pleural_thickening/image/4440_4.dcm | intercostal_pleural_thickening/mark/4440_4.dcm... |
| 265 | TDR04_20180227_083423 | lung_field_infiltration | 2504 | 2536 | 220_3.dcm | lung_field_infiltration/image/220_3.dcm | lung_field_infiltration/mark/220_3.dcm.jpg |
| 333 | TDR04_20180227_085056 | degenerative_joint_disease_of_the_thoracic_spine | 2336 | 2360 | 220_15.dcm | degenerative_joint_disease_of_the_thoracic_spi... | degenerative_joint_disease_of_the_thoracic_spi... |
| 393 | TDR01_20171109_083459 | scoliosis | 2232 | 2408 | A0_26.dcm | scoliosis/image/A0_26.dcm | scoliosis/mark/A0_26.dcm.jpg |
def plot_images_and_marks(df):
temp = df[df["category"].duplicated() == False]
rows, cols = 4, 2
fig = plt.figure(figsize = (16, 16))
grid = plt.GridSpec(rows, cols)
for i in range(rows * cols):
image = pydicom.dcmread(os.path.join(config.root, "train", temp.iloc[i, 5])).pixel_array
if temp.iloc[i, 1] != "normal":
mark = np.array(Image.open(os.path.join(config.root, "train", temp.iloc[i, 6])))
else:
mark = np.zeros((image.shape[0], image.shape[1]))
categories = fig.add_subplot(grid[i])
categories.set_title(f"{temp.iloc[i, 1]}\n", fontweight = 'semibold', size = 14)
categories.set_axis_off()
gs = gridspec.GridSpecFromSubplotSpec(1, 2, subplot_spec = grid[i])
ax = fig.add_subplot(gs[0])
ax.imshow(image, cmap = "gray")
ax.set_title("Image")
ax.axis("off")
ax = fig.add_subplot(gs[1], sharey = ax)
ax.imshow(mark, cmap = "gray")
ax.set_title("Mark")
ax.axis("off")
fig.patch.set_facecolor('white')
fig.suptitle("Images and marks of 8 categories\n", fontweight = 'bold', size = 16)
fig.tight_layout()
plot_images_and_marks(train_df)
X-ray image normalization
This part applies an intensity log-transformation together with the simplest color balance algorithm, so that the transformed X-ray images end up with comparable brightness. Taking the patient with ID TDR02_20161209_161439 as an example, we show the X-ray image before and after the transformation:
def X_ray_normalization(dcm_file, vmin, vmax):
img = pydicom.dcmread(dcm_file)
origin = img.pixel_array
# needed values
WW = img.WindowWidth
WC = img.WindowCenter
BitsStored = img.BitsStored
# Compute min and max intensity bounds
imin = WC - (WW / 2)
imax = WC + (WW / 2)
# Clip pixel values based on imin and imax
clipped = np.clip(origin, imin, imax)
# Perform intensity log-transformation
log_img = -np.log((1 + clipped) / (2 ** BitsStored))
# simplest color balance algorithm
# (percentile bounds could be used instead of the fixed vmin / vmax:)
# lower_bound = np.percentile(log_img, vmin)
# upper_bound = np.percentile(log_img, vmax)
normalize_img = (log_img - vmin) / (vmax - vmin)
normalize_img = np.clip(normalize_img, 0, 1)
return origin, log_img, normalize_img
def plot_before_and_after(ID, df):
patient_df = df[df["ID"] == ID]
path = os.path.join(config.root, "train", patient_df.iloc[0, 5])
origin, log_img, normalize_img = X_ray_normalization(path, vmin = 0, vmax = 2.5)
fig, ax = plt.subplots(1, 3, figsize = (16, 16))
np.vectorize(lambda ax: ax.axis('off'))(ax)
plt.subplots_adjust(wspace = None, hspace = None)
ax[0].imshow(origin, cmap = "gray")
ax[0].set_title("Original Image", size = 8)
ax[1].imshow(log_img, cmap = "gray")
ax[1].set_title("After Log-transformation", size = 8)
ax[2].imshow(normalize_img, cmap = "gray")
ax[2].set_title("After Normalization", size = 8)
fig.suptitle(f"{ID}", fontweight = 'bold', size = 10, x = 0.52, y = 0.77)
plot_before_and_after(ID = "TDR02_20161209_161439", df = train_df)
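As a quick numeric check of the windowing step above, with made-up header values WC = 2048, WW = 4096 and BitsStored = 12: pixels are clipped to [WC - WW/2, WC + WW/2] and only then log-transformed.

```python
import math

# Hypothetical DICOM header values, for illustration only.
WC, WW, bits_stored = 2048, 4096, 12
imin, imax = WC - WW / 2, WC + WW / 2  # clipping bounds: 0.0 and 4096.0

def window_and_log(pixel):
    """Clip one pixel value to the window, then apply the intensity log-transform."""
    clipped = min(max(pixel, imin), imax)
    return -math.log((1 + clipped) / (2 ** bits_stored))
```

Every value above imax maps to the same output, which is why the clip has to come before the log.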
Mask image to bounding box
This part converts the dataset's masks into bounding boxes suitable for model input, and for each of the 8 categories plots the transformed image, the transformed image with its bounding box, and the mask image:
def mask_to_bbox(mark_path):
img = np.array(Image.open(mark_path))
mask = img != 0
sk_mask = sk_label(mask, connectivity = 2)
regions = sk_regions(sk_mask)
bboxes = []
for region in regions:
if region.area < 3000 :
continue
bboxes.append(region.bbox)
ymin, xmin, ymax, xmax = bboxes[0]
return xmin, ymin, xmax, ymax
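For masks known to contain a single region, the same box can be sketched with NumPy alone. Unlike mask_to_bbox above, this does no connected-component labelling and no area filtering, so treat it as a simplified sketch under that single-region assumption:

```python
import numpy as np

def bbox_from_single_mask(mask):
    """(xmin, ymin, xmax, ymax) of the nonzero pixels.

    Half-open on the max side, matching skimage's region.bbox convention.
    Assumes the mask has exactly one region of interest.
    """
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1
```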
def plot_bbox_and_mark(df):
temp = df[df["category"].duplicated() == False]
rows, cols = 4, 2
fig = plt.figure(figsize = (16, 16))
grid = plt.GridSpec(rows, cols)
for i in range(rows * cols):
path = os.path.join(config.root, "train", temp.iloc[i, 5])
mark_path = os.path.join(config.root, "train", temp.iloc[i, 6])
_, _, after = X_ray_normalization(path, vmin = 0, vmax = 2.5)
if temp.iloc[i, 1] != "normal":
mark = np.array(Image.open(mark_path))
xmin, ymin, xmax, ymax = mask_to_bbox(mark_path)
else:
mark = np.zeros((after.shape[0], after.shape[1]))
xmin, ymin, xmax, ymax = 0, 0, 0, 0
bbox = patches.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin, linewidth = 2,
edgecolor = "r", facecolor = 'none')
categories = fig.add_subplot(grid[i])
categories.set_title(f"{temp.iloc[i, 1]}\n", fontweight = 'semibold', size = 14)
categories.set_axis_off()
gs = gridspec.GridSpecFromSubplotSpec(1, 3, subplot_spec = grid[i])
ax = fig.add_subplot(gs[0])
ax.imshow(after, cmap = "gray")
ax.set_title("Image")
ax.axis("off")
ax = fig.add_subplot(gs[1], sharey = ax)
ax.imshow(after, cmap = "gray")
ax.add_patch(bbox)
ax.set_title("Image with bbox")
ax.axis("off")
ax = fig.add_subplot(gs[2], sharey = ax)
ax.imshow(mark, cmap = "gray")
ax.set_title("Mark")
ax.axis("off")
fig.patch.set_facecolor('white')
fig.suptitle("Images with bbox and marks of 8 categories\n", fontweight = 'bold', size = 16)
fig.tight_layout()
plot_bbox_and_mark(train_df)
Next, write the extracted bounding boxes into the training dataframe:
def write_bbox(df):
all_xmin, all_ymin, all_xmax, all_ymax = [], [], [], []
for i in range(df.shape[0]):
if df.iloc[i, 1] != "normal":
mark_path = os.path.join(config.root, "train", df.iloc[i, 6])
xmin, ymin, xmax, ymax = mask_to_bbox(mark_path)
else:
xmin, ymin, xmax, ymax = 0, 0, 0, 0
all_xmin.append(xmin)
all_ymin.append(ymin)
all_xmax.append(xmax)
all_ymax.append(ymax)
df["xmin"] = all_xmin
df["ymin"] = all_ymin
df["xmax"] = all_xmax
df["ymax"] = all_ymax
write_bbox(train_df)
train_df
| | ID | category | Width | Height | Filename | ImagePath | MarkPath | xmin | ymin | xmax | ymax |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | TDR04_20180315_075734 | normal | 2328 | 2344 | 220_97.dcm | normal/image/220_97.dcm | normal/mark/220_97.dcm.jpg | 0 | 0 | 0 | 0 |
| 1 | TDR04_20180315_080518 | normal | 2472 | 2560 | 220_94.dcm | normal/image/220_94.dcm | normal/mark/220_94.dcm.jpg | 0 | 0 | 0 | 0 |
| 2 | TDR04_20180315_081322 | normal | 2312 | 2496 | 220_93.dcm | normal/image/220_93.dcm | normal/mark/220_93.dcm.jpg | 0 | 0 | 0 | 0 |
| 3 | TDR04_20180315_081746 | normal | 2448 | 2584 | 220_92.dcm | normal/image/220_92.dcm | normal/mark/220_92.dcm.jpg | 0 | 0 | 0 | 0 |
| 4 | TDR04_20180315_082113 | normal | 2144 | 2384 | 220_91.dcm | normal/image/220_91.dcm | normal/mark/220_91.dcm.jpg | 0 | 0 | 0 | 0 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 446 | TDR02_20161209_161439 | scoliosis | 2376 | 2592 | 4440_5.dcm | scoliosis/image/4440_5.dcm | scoliosis/mark/4440_5.dcm.jpg | 1016 | 560 | 1432 | 2328 |
| 447 | TDR04_20180224_084933 | scoliosis | 2248 | 2600 | 4440_0.dcm | scoliosis/image/4440_0.dcm | scoliosis/mark/4440_0.dcm.jpg | 912 | 552 | 1408 | 2272 |
| 448 | TDR04_20180226_082354 | scoliosis | 2488 | 2456 | 4440.dcm | scoliosis/image/4440.dcm | scoliosis/mark/4440.dcm.jpg | 1016 | 464 | 1560 | 2184 |
| 449 | TDR01_20171106_095308 | scoliosis | 2320 | 2376 | A0_29.dcm | scoliosis/image/A0_29.dcm | scoliosis/mark/A0_29.dcm.jpg | 1032 | 512 | 1384 | 2016 |
| 450 | TDR01_20171108_100516 | scoliosis | 2280 | 2288 | A0_28.dcm | scoliosis/image/A0_28.dcm | scoliosis/mark/A0_28.dcm.jpg | 912 | 592 | 1392 | 2120 |
451 rows Ć 11 columns
Write class id
Before converting the data format shortly, we need to write the disease categories as class_id, i.e. the integers 0 ~ 7.
labels = list(train_df["category"].unique())
label2class = {l: c for c, l in enumerate(labels)}
label2class
{'normal': 0,
'aortic_curvature': 1,
'aortic_atherosclerosis_calcification': 2,
'cardiac_hypertrophy': 3,
'intercostal_pleural_thickening': 4,
'lung_field_infiltration': 5,
'degenerative_joint_disease_of_the_thoracic_spine': 6,
'scoliosis': 7}
# write class_id
def write_class_id(df):
class_id = []
for i in range(df.shape[0]):
class_id.append(label2class[df.iloc[i, 1]])
df["class_id"] = class_id
write_class_id(train_df)
train_df
| | ID | category | Width | Height | Filename | ImagePath | MarkPath | xmin | ymin | xmax | ymax | class_id |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | TDR04_20180315_075734 | normal | 2328 | 2344 | 220_97.dcm | normal/image/220_97.dcm | normal/mark/220_97.dcm.jpg | 0 | 0 | 0 | 0 | 0 |
| 1 | TDR04_20180315_080518 | normal | 2472 | 2560 | 220_94.dcm | normal/image/220_94.dcm | normal/mark/220_94.dcm.jpg | 0 | 0 | 0 | 0 | 0 |
| 2 | TDR04_20180315_081322 | normal | 2312 | 2496 | 220_93.dcm | normal/image/220_93.dcm | normal/mark/220_93.dcm.jpg | 0 | 0 | 0 | 0 | 0 |
| 3 | TDR04_20180315_081746 | normal | 2448 | 2584 | 220_92.dcm | normal/image/220_92.dcm | normal/mark/220_92.dcm.jpg | 0 | 0 | 0 | 0 | 0 |
| 4 | TDR04_20180315_082113 | normal | 2144 | 2384 | 220_91.dcm | normal/image/220_91.dcm | normal/mark/220_91.dcm.jpg | 0 | 0 | 0 | 0 | 0 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 446 | TDR02_20161209_161439 | scoliosis | 2376 | 2592 | 4440_5.dcm | scoliosis/image/4440_5.dcm | scoliosis/mark/4440_5.dcm.jpg | 1016 | 560 | 1432 | 2328 | 7 |
| 447 | TDR04_20180224_084933 | scoliosis | 2248 | 2600 | 4440_0.dcm | scoliosis/image/4440_0.dcm | scoliosis/mark/4440_0.dcm.jpg | 912 | 552 | 1408 | 2272 | 7 |
| 448 | TDR04_20180226_082354 | scoliosis | 2488 | 2456 | 4440.dcm | scoliosis/image/4440.dcm | scoliosis/mark/4440.dcm.jpg | 1016 | 464 | 1560 | 2184 | 7 |
| 449 | TDR01_20171106_095308 | scoliosis | 2320 | 2376 | A0_29.dcm | scoliosis/image/A0_29.dcm | scoliosis/mark/A0_29.dcm.jpg | 1032 | 512 | 1384 | 2016 | 7 |
| 450 | TDR01_20171108_100516 | scoliosis | 2280 | 2288 | A0_28.dcm | scoliosis/image/A0_28.dcm | scoliosis/mark/A0_28.dcm.jpg | 912 | 592 | 1392 | 2120 | 7 |
451 rows Ć 12 columns
Split training set and validation set
Note that a single image may contain several diseases (multi-label), so we cannot split the training and validation sets with the usual train_test_split, which would lead to class imbalance. Moreover, since one patient may occupy several rows of the dataframe, the split has to be done by ID:
First, encode the disease categories in one-hot form:
train_df.nunique()['ID'], train_df.shape[0]
(348, 451)
binarizer = MultiLabelBinarizer()
disease_id = []
for ID in train_df.ID.unique():
diseases = []
temp = train_df[train_df["ID"] == ID]
diseases.extend(list(temp["class_id"]))
disease_id.append(diseases)
one_hot = binarizer.fit_transform(disease_id)
one_hot
array([[1, 0, 0, ..., 0, 0, 0],
[1, 0, 0, ..., 0, 0, 0],
[1, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 1],
[0, 0, 0, ..., 0, 0, 1],
[0, 0, 0, ..., 0, 0, 1]])
one_hot.shape
(348, 8)
train_ID, train_label, val_ID, val_label = iterative_train_test_split(np.expand_dims(train_df["ID"].unique(), axis = 1), one_hot, test_size = 0.2)
training = train_df[train_df["ID"].isin(train_ID.ravel())]
validation = train_df[train_df["ID"].isin(val_ID.ravel())]
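The grouping step above (collecting each patient's class ids before binarizing) can be sketched in plain Python with hypothetical IDs:

```python
# (ID, class_id) rows as they might appear in train_df; purely illustrative.
rows = [("p1", 0), ("p2", 3), ("p2", 7), ("p3", 7)]

# Collect the set of class ids seen for each patient ID.
labels_per_id = {}
for pid, cid in rows:
    labels_per_id.setdefault(pid, set()).add(cid)

# One-hot encode over the 8 classes, mirroring MultiLabelBinarizer's output.
n_classes = 8
one_hot = {pid: [1 if c in cids else 0 for c in range(n_classes)]
           for pid, cids in labels_per_id.items()}
```

Each patient then contributes exactly one row to the matrix that iterative_train_test_split stratifies, which is what keeps all of a patient's images on the same side of the split.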
Dataset to COCO format
https://cocodataset.org/#format-data
Most object detection models expect input data in Pascal VOC format or COCO format; this section demonstrates how to convert the data to COCO format.
A COCO dataset consists of five parts:
- info: general information about the dataset
- licenses: license information for the images in the dataset
- images: a list of images in the dataset
- annotations: a list of annotations (including bounding boxes) present in all images in the dataset
- categories: a list of label categories
Since info and licenses are not required, here we only create the images, annotations and categories parts.
categories = []
for l, c in label2class.items():
if l == "normal":
continue
categories.append({"id": c, "name": l})
categories
[{'id': 1, 'name': 'aortic_curvature'},
{'id': 2, 'name': 'aortic_atherosclerosis_calcification'},
{'id': 3, 'name': 'cardiac_hypertrophy'},
{'id': 4, 'name': 'intercostal_pleural_thickening'},
{'id': 5, 'name': 'lung_field_infiltration'},
{'id': 6, 'name': 'degenerative_joint_disease_of_the_thoracic_spine'},
{'id': 7, 'name': 'scoliosis'}]
Format:
The output is a dictionary containing the 3 keys below; each key maps to a list of dictionaries.
https://www.v7labs.com/blog/coco-dataset-guide
Images
- id
- file_name
- height and width
Annotations
- id: annotation unique id
- image_id: The ID of the image (id under Images) that this annotation belongs to.
- category_id: ID of the object's category.
- bbox: Bounding box [x, y, width, height]
- segmentation: Segmentation polygons for the object.
- area: Area of the object (used for filtering).
- iscrowd: Flag indicating if the annotation is for a crowd (e.g., a dense cluster of objects).
Categories
Bounding boxes format
COCO stores bounding boxes as [x, y, width, height], i.e. the top-left corner followed by the box size.
# change data to coco format
def coco_format(df, categories):
coco_output = {
"images" : [],
"categories" : [],
"annotations" : []
}
coco_output['categories'] = categories
annotation_id = 0
for image_id, img_name in enumerate(df.ID.unique()):
image_df = df[df.ID == img_name]
unique = image_df.iloc[0]
image_dict = {
"file_name" : unique.category + "/" + unique.Filename.replace(".dcm", ".jpg"),
"height" : int(unique.Height),
"width" : int(unique.Width),
"id" : image_id
}
coco_output['images'].append(image_dict)
for _, row in image_df.iterrows():
xmin = int(row.xmin)
ymin = int(row.ymin)
xmax = int(row.xmax)
ymax = int(row.ymax)
if xmin == ymin == xmax == ymax == 0:
continue
area = (xmax - xmin) * (ymax - ymin)
poly = [
(xmin, ymin), (xmax, ymin),
(xmax, ymax), (xmin, ymax)
]
poly = list(itertools.chain.from_iterable(poly))
mask_dict = {
"id" : annotation_id,
"image_id" : image_id,
"category_id" : int(row.class_id),  # native int so json.dump can serialize it
"bbox" : [xmin, ymin, (xmax - xmin), (ymax - ymin)],
"area" : area,
"iscrowd" : 0,
"segmentation" : [poly],
}
coco_output["annotations"].append(mask_dict)
annotation_id += 1
return coco_output
train_coco = coco_format(training, categories)
val_coco = coco_format(validation, categories)
train_coco['images'][:5]
[{'file_name': 'normal/220_94.jpg', 'height': 2560, 'width': 2472, 'id': 0},
{'file_name': 'normal/220_93.jpg', 'height': 2496, 'width': 2312, 'id': 1},
{'file_name': 'normal/220_90.jpg', 'height': 2632, 'width': 2320, 'id': 2},
{'file_name': 'normal/220_88.jpg', 'height': 2624, 'width': 2560, 'id': 3},
{'file_name': 'normal/220_86.jpg', 'height': 2632, 'width': 2544, 'id': 4}]
train_coco['annotations'][:5]
[{'id': 0,
'image_id': 64,
'category_id': 1,
'bbox': [904, 656, 760, 1143],
'area': 868680,
'iscrowd': 0,
'segmentation': [[904, 656, 1664, 656, 1664, 1799, 904, 1799]]},
{'id': 1,
'image_id': 64,
'category_id': 5,
'bbox': [232, 128, 2200, 2024],
'area': 4452800,
'iscrowd': 0,
'segmentation': [[232, 128, 2432, 128, 2432, 2152, 232, 2152]]},
{'id': 2,
'image_id': 65,
'category_id': 1,
'bbox': [872, 736, 632, 1072],
'area': 677504,
'iscrowd': 0,
'segmentation': [[872, 736, 1504, 736, 1504, 1808, 872, 1808]]},
{'id': 3,
'image_id': 65,
'category_id': 5,
'bbox': [144, 280, 2168, 1952],
'area': 4231936,
'iscrowd': 0,
'segmentation': [[144, 280, 2312, 280, 2312, 2232, 144, 2232]]},
{'id': 4,
'image_id': 66,
'category_id': 1,
'bbox': [1024, 528, 608, 1040],
'area': 632320,
'iscrowd': 0,
'segmentation': [[1024, 528, 1632, 528, 1632, 1568, 1024, 1568]]}]
train_coco['categories']
[{'id': 1, 'name': 'aortic_curvature'},
{'id': 2, 'name': 'aortic_atherosclerosis_calcification'},
{'id': 3, 'name': 'cardiac_hypertrophy'},
{'id': 4, 'name': 'intercostal_pleural_thickening'},
{'id': 5, 'name': 'lung_field_infiltration'},
{'id': 6, 'name': 'degenerative_joint_disease_of_the_thoracic_spine'},
{'id': 7, 'name': 'scoliosis'}]
Save files
Here we save the normalized images as .jpg files and the COCO-format annotations as .json files, for convenient use later:
def dcm_to_jpg(df):
for path in df.ImagePath:
dcm_path = os.path.join(config.root, "train", path)
_, _, image = X_ray_normalization(dcm_path, vmin = 0, vmax = 2.5)
file = os.path.join("/kaggle/working/train", path.split("/")[0])
# file = os.path.join(config.root,"kaggle/working/train", path.split("/")[0])
jpg_name = path.split("/")[-1].replace(".dcm", ".jpg")
if os.path.isdir(file) == False:
os.makedirs(file)
print("makedirs")
plt.imsave(f"{file}/{jpg_name}", image, cmap = "gray")
dcm_to_jpg(train_df)
makedirs makedirs makedirs makedirs makedirs makedirs makedirs makedirs
with open("train.json", "w") as outfile:
json.dump(train_coco, outfile)
with open("val.json", "w") as outfile:
json.dump(val_coco, outfile)
TO DO:
- Prepare the test set images (jpg)
def Testdcm_to_jpg(df):
for path in df.ImagePath:
path=path.lstrip("/")
dcm_path = os.path.join(config.root, "test", path)
_, _, image = X_ray_normalization(dcm_path, vmin = 0, vmax = 2.5)
file = os.path.join("/kaggle/working/test", path.split("/")[0])
# file = os.path.join(config.root,"/kaggle/working/test", path.split("/")[0])
jpg_name = path.split("/")[-1].replace(".dcm", ".jpg")
if os.path.isdir(file) == False:
os.makedirs(file)
print("makedirs")
plt.imsave(f"{file}/{jpg_name}", image, cmap = "gray")
test_df = pd.read_csv("/kaggle/input/hwk05-data/hwk05_data/test/test.csv", encoding='big5')
Testdcm_to_jpg(test_df)
makedirs
hw5-3
# import libraries
# basic
#import warnings
#warnings.filterwarnings('ignore')
import os
os.environ['CUBLAS_WORKSPACE_CONFIG'] = ':4096:8'
import random
import numpy as np
import pandas as pd
import math
from tqdm.notebook import tqdm
# visualization
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.patches as patches
# PyTorch
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import models
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.models.detection import FasterRCNN
from torchvision.transforms import v2
from torchvision import tv_tensors
from torchvision.tv_tensors import BoundingBoxes
# object detection
!pip install pycocotools
import pycocotools
from pycocotools.coco import COCO
!wget https://raw.githubusercontent.com/pytorch/vision/main/references/detection/engine.py
!wget https://raw.githubusercontent.com/pytorch/vision/main/references/detection/utils.py
!wget https://raw.githubusercontent.com/pytorch/vision/main/references/detection/coco_utils.py
!wget https://raw.githubusercontent.com/pytorch/vision/main/references/detection/coco_eval.py
!wget https://raw.githubusercontent.com/pytorch/vision/main/references/detection/transforms.py
from engine import evaluate
## TODO: Prepare your own information
class config2:
## roots for training & valid
root = "/kaggle/working/train"
info_root = "/kaggle/working"
save_root = "/kaggle/working"
## for test images
test_root = '/kaggle/working/test'
info_root_test = '/kaggle/input/hwk05-data/hwk05_data/train'
num_classes = 8 #(for fasterrcnn: background + # of classes): 1+7=8
batch_size = 8
epochs = 20
weight_decay = 1e-4
lr = 0.005
momentum = 0.9
seed = 42
workers = 8
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def seed_everything(seed):
random.seed(seed) # Set Python random seed
np.random.seed(seed) # Set NumPy random seed
torch.manual_seed(seed) # Set PyTorch random seed for CPU and GPU
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
# Set PyTorch deterministic operations for cudnn backend
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
seed_everything(config2.seed)
## TO DO: Augmentation?
class medTransform:
def __init__(self, train=False):
if train:
self.transforms = v2.Compose(
[
v2.ToImage(), # Used while using PIL image
v2.RandomRotation(degrees=5),
v2.ToDtype(torch.float32, scale=True),
]
)
else:
self.transforms = v2.Compose(
[
v2.ToImage(), # Used while using PIL image
v2.ToDtype(torch.float32, scale=True),
]
)
def __call__(self, x, bboxes):
if isinstance(x, torch.Tensor):
height, width = x.shape[-2], x.shape[-1] # (C, H, W) format
else: # Assuming x is a PIL Image
width, height = x.size
# Loading format is COCO bboxes[x,y,w,h]
bboxes = tv_tensors.BoundingBoxes(bboxes, format="XYWH", canvas_size=(height, width))
return self.transforms(x, bboxes)
class MedDataset(Dataset):
def __init__(self, root, info_root, split, transforms = None):
self.split = split
self.root = root
self.info_root = info_root
self.transforms = transforms
self.coco = COCO(os.path.join(self.info_root, f"{self.split}.json"))
self.ids = list(sorted(self.coco.imgs.keys()))
def get_image(self, img_id: int):
image_path = os.path.join(self.root,self.coco.loadImgs(img_id)[0]['file_name'])
image = Image.open(image_path).convert("RGB")
return image
def get_annotation(self, img_id: int):
return self.coco.loadAnns(self.coco.getAnnIds(img_id))
def __getitem__(self, index):
normal = False
img_id = self.ids[index]
image = self.get_image(img_id)
annotation = self.get_annotation(img_id)
bboxes = [a['bbox'] for a in annotation]
category_ids = [a['category_id'] for a in annotation]
if bboxes == []:
normal = True
if self.transforms:
image, bboxes = self.transforms(image, bboxes)
def reformat_bboxes(boxes):
return [[val[0], val[1], val[0] + val[2], val[1] + val[3]] for val in boxes]
if normal != True:
## Recall that the original format is COCO
bboxes = reformat_bboxes(bboxes)
def create_target(bboxes, normal):
if normal:
return {
'boxes': torch.zeros((0, 4), dtype=torch.float32), # Empty boxes
'labels': torch.zeros((0,), dtype=torch.int64), # Empty labels
'image_id': img_id,
'area': torch.zeros((0,), dtype=torch.float32), # Empty areas
'iscrowd': torch.zeros((0,), dtype=torch.int64), # Empty tensor for iscrowd
}
else:
return {
'boxes': torch.tensor(bboxes, dtype=torch.float32),
'labels': torch.tensor(category_ids, dtype=torch.int64),
'image_id': img_id,
'area': torch.tensor([(bbox[2] - bbox[0]) * (bbox[3] - bbox[1]) for bbox in bboxes], dtype=torch.float32),
'iscrowd': torch.tensor([a['iscrowd'] for a in annotation], dtype=torch.int64)
}
targets = create_target(bboxes,normal)
return image, targets
def __len__(self):
return len(self.ids)
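The dataset above converts COCO-style `[x, y, w, h]` boxes into the `[x0, y0, x1, y1]` corner format that torchvision detection models expect. A minimal pure-Python sketch of the same conversion performed by `reformat_bboxes` (the example values are made up):

```python
def xywh_to_xyxy(boxes):
    """Convert COCO-style [x, y, w, h] boxes to [x0, y0, x1, y1] corners."""
    return [[x, y, x + w, y + h] for x, y, w, h in boxes]

# Hypothetical box: top-left corner (10, 20), width 50, height 80
corners = xywh_to_xyxy([[10, 20, 50, 80]])
print(corners)  # [[10, 20, 60, 100]]
```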
def collate_fn(batch: list[tuple[torch.Tensor, dict]]):
return tuple(zip(*batch))
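Detection batches cannot be stacked into a single tensor because each image carries a different number of boxes, so `collate_fn` simply transposes the list of `(image, target)` pairs into a tuple of images and a tuple of targets. A small sketch with dummy stand-in values:

```python
# Dummy batch of (image, target) pairs; strings/dicts stand in for tensors
batch = [("img0", {"boxes": 2}), ("img1", {"boxes": 5})]

images, targets = tuple(zip(*batch))  # the same transpose collate_fn performs
print(images)   # ('img0', 'img1')
print(targets)  # ({'boxes': 2}, {'boxes': 5})
```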
def plot_image_with_boxes(image_tensor, boxes_dict):
image_np = image_tensor.permute(1, 2, 0).numpy()
fig, ax = plt.subplots(1)
# Display the image
ax.imshow(image_np)
for box in boxes_dict['boxes']:
# Extract corner coordinates: (x0, y0) is the top-left, (x1, y1) the bottom-right
x0, y0, x1, y1 = box
# Width and height of the box in pixel coordinates
width = x1 - x0
height = y1 - y0
# Create a rectangle patch anchored at the top-left corner (x0, y0)
rect = patches.Rectangle((x0, y0), width, height, linewidth=2, edgecolor='r', facecolor='none')
ax.add_patch(rect)
plt.show()
def fasterrcnn(num_classes):
# Load a Faster R-CNN with a ResNet-50 FPN backbone pre-trained on COCO
model = models.detection.fasterrcnn_resnet50_fpn(weights='COCO_V1')
in_features = model.roi_heads.box_predictor.cls_score.in_features
# Replace the 91-class COCO predictor head with one sized for our classes
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
return model
def train_one_epoch(model, train_loader, optimizer, epoch, device):
model.train()
train_loss = []
train_loss_dict = []
lr_scheduler = None
for images, targets in tqdm(train_loader):
images = [image.to(device) for image in images]
targets = [{k: (torch.tensor(v,device=device) if not isinstance(v, torch.Tensor) else v.to(device)) for k, v in t.items()} for t in targets]
loss_dict = model(images, targets)
losses = sum(loss for loss in loss_dict.values())
batch_loss_value = losses.item()
batch_loss_dict = {k: v.item() for k, v in loss_dict.items()}
train_loss.append(batch_loss_value)
train_loss_dict.append(batch_loss_dict)
optimizer.zero_grad()
losses.backward()
optimizer.step()
if lr_scheduler is not None:
lr_scheduler.step()
train_loss = np.mean(train_loss)
train_loss_dict = pd.DataFrame(train_loss_dict).mean()
train_loss_classifier = train_loss_dict.loss_classifier
train_loss_box_reg = train_loss_dict.loss_box_reg
train_loss_rpn_box_reg = train_loss_dict.loss_rpn_box_reg
train_loss_objectness = train_loss_dict.loss_objectness
return train_loss, train_loss_classifier, train_loss_box_reg, train_loss_rpn_box_reg, train_loss_objectness
def validation(model, val_loader, device):
# Keep train mode so the model returns loss dicts instead of detections,
# but put every normalization layer in eval mode so running stats stay frozen
model.train()
for m in model.modules():
if isinstance(m, (torchvision.ops.Conv2dNormActivation, torchvision.ops.FrozenBatchNorm2d, torch.nn.BatchNorm2d)):
m.eval()
val_loss = []
val_loss_dict = []
with torch.no_grad():
for images, targets in tqdm(val_loader):
images = [image.to(device) for image in images]
targets = [{k: (torch.tensor(v,device=device) if not isinstance(v, torch.Tensor) else v.to(device)) for k, v in t.items()} for t in targets]
loss = model(images, targets)
total_loss = sum(l for l in loss.values())
loss_value = total_loss.item()
loss_dict = {k: v.item() for k, v in loss.items()}
val_loss.append(loss_value)
val_loss_dict.append(loss_dict)
val_loss = np.mean(val_loss)
val_loss_dict = pd.DataFrame(val_loss_dict).mean()
val_loss_classifier = val_loss_dict.loss_classifier
val_loss_box_reg = val_loss_dict.loss_box_reg
val_loss_rpn_box_reg = val_loss_dict.loss_rpn_box_reg
val_loss_objectness = val_loss_dict.loss_objectness
return val_loss, val_loss_classifier, val_loss_box_reg, val_loss_rpn_box_reg, val_loss_objectness
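Both `train_one_epoch` and `validation` average a list of per-batch loss dicts with `pd.DataFrame(...).mean()`. The same per-key averaging can be sketched in plain Python (the loss names and values below are illustrative):

```python
def mean_loss_dict(loss_dicts):
    """Average a list of per-batch loss dicts key by key
    (pure-Python equivalent of pd.DataFrame(loss_dicts).mean())."""
    n = len(loss_dicts)
    return {k: sum(d[k] for d in loss_dicts) / n for k in loss_dicts[0]}

batches = [{"loss_classifier": 0.2, "loss_box_reg": 0.1},
           {"loss_classifier": 0.4, "loss_box_reg": 0.3}]
means = mean_loss_dict(batches)  # ≈ {'loss_classifier': 0.3, 'loss_box_reg': 0.2}
```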
def seed_worker(worker_id):
worker_seed = torch.initial_seed() % 2**32
np.random.seed(worker_seed)
random.seed(worker_seed)
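`seed_worker` reduces `torch.initial_seed()` modulo 2**32 because NumPy's legacy seeding only accepts 32-bit values. A quick illustration of the reduction with a made-up 64-bit seed:

```python
# Illustrative 64-bit value standing in for torch.initial_seed()
big_seed = 0xDEADBEEFCAFEBABE
worker_seed = big_seed % 2**32  # same reduction seed_worker applies

assert 0 <= worker_seed < 2**32
print(hex(worker_seed))  # 0xcafebabe (only the low 32 bits survive)
```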
def main():
seed_everything(config2.seed)
g = torch.Generator()
g.manual_seed(config2.seed)
train_dataset = MedDataset(root = config2.root, info_root = config2.info_root, split = "train", transforms = medTransform(train=True))
val_dataset = MedDataset(root = config2.root, info_root = config2.info_root, split = "val", transforms = medTransform(train=False))
train_loader = DataLoader(train_dataset, batch_size = config2.batch_size, shuffle = True,
num_workers=config2.workers, worker_init_fn=seed_worker,
generator=g, collate_fn = collate_fn, pin_memory=True
)
val_loader = DataLoader(val_dataset, batch_size = config2.batch_size, shuffle = False,
num_workers=config2.workers, worker_init_fn=seed_worker,
generator=g, collate_fn = collate_fn,pin_memory=True
)
device = config2.device
model = fasterrcnn(num_classes = config2.num_classes)
model.to(device)
parameters = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(parameters, lr = config2.lr, momentum = config2.momentum, nesterov = True, weight_decay = config2.weight_decay)
best_map50 = 0.0
history = {
"train": {
"loss": [],
"loss_classifier": [],
"loss_box_reg": [],
"loss_rpn_box_reg": [],
"loss_objectness": []
},
"val": {
"loss": [],
"loss_classifier": [],
"loss_box_reg": [],
"loss_rpn_box_reg": [],
"loss_objectness": []
},
"map50":{
"train":[],
"valid":[],
}
}
best_idx = 0
print('start')
for epoch in range(config2.epochs):
print()
train_loss, train_loss_classifier, train_loss_box_reg, train_loss_rpn_box_reg, train_loss_objectness = train_one_epoch(
model, train_loader, optimizer, epoch, device,
)
val_loss, val_loss_classifier, val_loss_box_reg, val_loss_rpn_box_reg, val_loss_objectness = validation(
model, val_loader, device
)
## Training
history["train"]["loss"].append(train_loss)
history["train"]["loss_classifier"].append(train_loss_classifier)
history["train"]["loss_box_reg"].append(train_loss_box_reg)
history["train"]["loss_rpn_box_reg"].append(train_loss_rpn_box_reg)
history["train"]["loss_objectness"].append(train_loss_objectness)
## Validation
history["val"]["loss"].append(val_loss)
history["val"]["loss_classifier"].append(val_loss_classifier)
history["val"]["loss_box_reg"].append(val_loss_box_reg)
history["val"]["loss_rpn_box_reg"].append(val_loss_rpn_box_reg)
history["val"]["loss_objectness"].append(val_loss_objectness)
print(f'Epoch: {epoch+1}/{config2.epochs} | LR: {optimizer.state_dict()["param_groups"][0]["lr"]:.6f}')
print("*****Training*****")
print(f'Loss: {train_loss:.4f} | Classifier Loss: {train_loss_classifier:.4f} | Box Reg Loss: {train_loss_box_reg:.4f} | RPN Box Reg Loss: {train_loss_rpn_box_reg:.4f} | Objectness Loss: {train_loss_objectness:.4f}')
train_evaluator = evaluate(model, train_loader, device = device)
print("*****Validation*****")
print(f'Loss: {val_loss:.4f} | Classifier Loss: {val_loss_classifier:.4f} | Box Reg Loss: {val_loss_box_reg:.4f} | RPN Box Reg Loss: {val_loss_rpn_box_reg:.4f} | Objectness Loss: {val_loss_objectness:.4f}')
valid_evaluator = evaluate(model, val_loader, device = device)
train_map50 = train_evaluator.coco_eval['bbox'].stats[1]
valid_map50 = valid_evaluator.coco_eval['bbox'].stats[1]
history["map50"]["train"].append(train_map50)
history["map50"]["valid"].append(valid_map50)
## TODO save your model
if valid_map50 > best_map50:
best_map50 = valid_map50
save_file = {
"model": model.state_dict(),
"optimizer": optimizer.state_dict(),
"epoch": epoch,
"args": config2
}
best_idx=epoch
torch.save(save_file, os.path.join(config2.save_root,"final.pth"))
print(f'Best epoch so far: {best_idx+1}')
## Evaluation result
plt.figure(figsize = (12, 5))
plt.subplot(1, 2, 1)
plt.plot(range(config2.epochs), history["map50"]["train"], label = 'Training mAP@50')
plt.plot(range(config2.epochs), history["map50"]["valid"], label = 'Validation mAP@50')
plt.xlabel('Epoch')
plt.ylabel('mAP@50')
plt.legend()
plt.title('Training and Validation mAP@50')
plt.show()
plt.figure(figsize = (12, 5))
plt.subplot(1, 2, 1)
plt.plot(range(config2.epochs), history["train"]["loss"], label = 'Training Loss')
plt.plot(range(config2.epochs), history["val"]["loss"], label = 'Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.title('Training and Validation Loss Curves')
plt.show()
plt.figure(figsize = (12, 5))
plt.subplot(1, 2, 1)
plt.plot(range(config2.epochs), history["train"]["loss_classifier"], label = 'Training Classifier Loss')
plt.plot(range(config2.epochs), history["val"]["loss_classifier"], label = 'Validation Classifier Loss')
plt.xlabel('Epoch')
plt.ylabel('Classifier Loss')
plt.legend()
plt.title('Training and Validation Classifier Loss Curves')
plt.show()
plt.figure(figsize = (12, 5))
plt.subplot(1, 2, 1)
plt.plot(range(config2.epochs), history["train"]["loss_box_reg"], label = 'Training Box Reg Loss')
plt.plot(range(config2.epochs), history["val"]["loss_box_reg"], label = 'Validation Box Reg Loss')
plt.xlabel('Epoch')
plt.ylabel('Box Reg Loss')
plt.legend()
plt.title('Training and Validation Box Reg Loss Curves')
plt.show()
plt.figure(figsize = (12, 5))
plt.subplot(1, 2, 1)
plt.plot(range(config2.epochs), history["train"]["loss_rpn_box_reg"], label = 'Training RPN Box Reg Loss')
plt.plot(range(config2.epochs), history["val"]["loss_rpn_box_reg"], label = 'Validation RPN Box Reg Loss')
plt.xlabel('Epoch')
plt.ylabel('RPN Box Reg Loss')
plt.legend()
plt.title('Training and Validation RPN Box Reg Loss Curves')
plt.show()
plt.figure(figsize = (12, 5))
plt.subplot(1, 2, 1)
plt.plot(range(config2.epochs), history["train"]["loss_objectness"], label = 'Training Objectness Loss')
plt.plot(range(config2.epochs), history["val"]["loss_objectness"], label = 'Validation Objectness Loss')
plt.xlabel('Epoch')
plt.ylabel('Objectness Loss')
plt.legend()
plt.title('Training and Validation Objectness Loss Curves')
plt.show()
if __name__ == "__main__":
main()
loading annotations into memory... Done (t=0.00s) creating index... index created! loading annotations into memory... Done (t=0.00s) creating index... index created!
Downloading: "https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth" to /root/.cache/torch/hub/checkpoints/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth 100%|āāāāāāāāāā| 160M/160M [00:00<00:00, 197MB/s]
start
0%| | 0/35 [00:00<?, ?it/s]
0%| | 0/10 [00:00<?, ?it/s]
Epoch: 1/20 | LR: 0.005000 *****Training***** Loss: 0.3138 | Classifier Loss: 0.1968 | Box Reg Loss: 0.0935 | RPN Box Reg Loss: 0.0089 | Objectness Loss: 0.0145 creating index... index created! Test: [ 0/35] eta: 0:12:50 model_time: 1.2099 (1.2099) evaluator_time: 0.1441 (0.1441) time: 22.0015 data: 20.5881 max mem: 11900 Test: [34/35] eta: 0:00:02 model_time: 0.8543 (0.8925) evaluator_time: 0.0360 (0.0686) time: 1.9947 data: 1.0389 max mem: 11900 Test: Total time: 0:01:30 (2.5810 s / it) Averaged stats: model_time: 0.8543 (0.8925) evaluator_time: 0.0360 (0.0686) Accumulating evaluation results... DONE (t=0.19s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.033 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.100 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.015 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.033 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.119 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.269 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.269 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.269 *****Validation***** Loss: 0.1989 | Classifier Loss: 0.0990 | Box Reg Loss: 0.0825 | RPN Box Reg Loss: 0.0065 | Objectness Loss: 0.0109 creating index... index created! 
Test: [ 0/10] eta: 0:01:21 model_time: 0.8241 (0.8241) evaluator_time: 0.0099 (0.0099) time: 8.1230 data: 7.2376 max mem: 11900 Test: [ 9/10] eta: 0:00:01 model_time: 0.8241 (0.7459) evaluator_time: 0.0136 (0.0122) time: 1.5261 data: 0.7240 max mem: 11900 Test: Total time: 0:00:15 (1.5365 s / it) Averaged stats: model_time: 0.8241 (0.7459) evaluator_time: 0.0136 (0.0122) Accumulating evaluation results... DONE (t=0.06s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.024 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.082 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.009 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.024 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.103 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.259 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.262 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.262
0%| | 0/35 [00:00<?, ?it/s]
0%| | 0/10 [00:00<?, ?it/s]
Epoch: 2/20 | LR: 0.005000 *****Training***** Loss: 0.2156 | Classifier Loss: 0.1035 | Box Reg Loss: 0.0991 | RPN Box Reg Loss: 0.0068 | Objectness Loss: 0.0062 creating index... index created! Test: [ 0/35] eta: 0:13:27 model_time: 0.9888 (0.9888) evaluator_time: 0.0900 (0.0900) time: 23.0640 data: 21.9080 max mem: 11900 Test: [34/35] eta: 0:00:02 model_time: 0.8700 (0.8832) evaluator_time: 0.0192 (0.0543) time: 1.7739 data: 0.8249 max mem: 11900 Test: Total time: 0:01:27 (2.5023 s / it) Averaged stats: model_time: 0.8700 (0.8832) evaluator_time: 0.0192 (0.0543) Accumulating evaluation results... DONE (t=0.16s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.093 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.228 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.057 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.093 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.298 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.357 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.359 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.359 *****Validation***** Loss: 0.1860 | Classifier Loss: 0.0901 | Box Reg Loss: 0.0830 | RPN Box Reg Loss: 0.0073 | Objectness Loss: 0.0056 creating index... index created! 
Test: [ 0/10] eta: 0:01:20 model_time: 0.8165 (0.8165) evaluator_time: 0.0125 (0.0125) time: 8.0725 data: 7.1909 max mem: 11900 Test: [ 9/10] eta: 0:00:01 model_time: 0.8165 (0.7453) evaluator_time: 0.0123 (0.0119) time: 1.5206 data: 0.7194 max mem: 11900 Test: Total time: 0:00:15 (1.5344 s / it) Averaged stats: model_time: 0.8165 (0.7453) evaluator_time: 0.0123 (0.0119) Accumulating evaluation results... DONE (t=0.06s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.085 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.204 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.050 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.085 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.269 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.311 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.311 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.311
0%| | 0/35 [00:00<?, ?it/s]
0%| | 0/10 [00:00<?, ?it/s]
Epoch: 3/20 | LR: 0.005000 *****Training***** Loss: 0.2058 | Classifier Loss: 0.0945 | Box Reg Loss: 0.0998 | RPN Box Reg Loss: 0.0065 | Objectness Loss: 0.0049 creating index... index created! Test: [ 0/35] eta: 0:12:41 model_time: 1.0041 (1.0041) evaluator_time: 0.0942 (0.0942) time: 21.7537 data: 20.5573 max mem: 11900 Test: [34/35] eta: 0:00:02 model_time: 0.8578 (0.8756) evaluator_time: 0.0347 (0.0491) time: 1.8310 data: 0.8999 max mem: 11900 Test: Total time: 0:01:26 (2.4692 s / it) Averaged stats: model_time: 0.8578 (0.8756) evaluator_time: 0.0347 (0.0491) Accumulating evaluation results... DONE (t=0.17s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.129 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.292 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.079 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.129 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.431 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.495 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.495 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.495 *****Validation***** Loss: 0.1771 | Classifier Loss: 0.0845 | Box Reg Loss: 0.0804 | RPN Box Reg Loss: 0.0066 | Objectness Loss: 0.0056 creating index... index created! 
Test: [ 0/10] eta: 0:01:14 model_time: 0.8426 (0.8426) evaluator_time: 0.0158 (0.0158) time: 7.4701 data: 6.5657 max mem: 11900 Test: [ 9/10] eta: 0:00:01 model_time: 0.8426 (0.7667) evaluator_time: 0.0109 (0.0112) time: 1.4737 data: 0.6569 max mem: 11900 Test: Total time: 0:00:14 (1.4864 s / it) Averaged stats: model_time: 0.8426 (0.7667) evaluator_time: 0.0109 (0.0112) Accumulating evaluation results... DONE (t=0.05s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.093 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.217 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.055 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.093 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.395 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.446 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.446 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.446
0%| | 0/35 [00:00<?, ?it/s]
0%| | 0/10 [00:00<?, ?it/s]
Epoch: 4/20 | LR: 0.005000 *****Training***** Loss: 0.1785 | Classifier Loss: 0.0843 | Box Reg Loss: 0.0837 | RPN Box Reg Loss: 0.0065 | Objectness Loss: 0.0040 creating index... index created! Test: [ 0/35] eta: 0:12:35 model_time: 1.0376 (1.0376) evaluator_time: 0.0841 (0.0841) time: 21.5740 data: 20.3918 max mem: 11900 Test: [34/35] eta: 0:00:02 model_time: 0.8933 (0.9104) evaluator_time: 0.0508 (0.0571) time: 1.7386 data: 0.7579 max mem: 11900 Test: Total time: 0:01:24 (2.4250 s / it) Averaged stats: model_time: 0.8933 (0.9104) evaluator_time: 0.0508 (0.0571) Accumulating evaluation results... DONE (t=0.14s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.166 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.347 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.142 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.166 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.530 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.563 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.563 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.563 *****Validation***** Loss: 0.1529 | Classifier Loss: 0.0759 | Box Reg Loss: 0.0665 | RPN Box Reg Loss: 0.0059 | Objectness Loss: 0.0045 creating index... index created! 
Test: [ 0/10] eta: 0:01:20 model_time: 0.8551 (0.8551) evaluator_time: 0.0115 (0.0115) time: 8.0962 data: 7.1835 max mem: 11900 Test: [ 9/10] eta: 0:00:01 model_time: 0.8551 (0.7928) evaluator_time: 0.0111 (0.0103) time: 1.5607 data: 0.7186 max mem: 11900 Test: Total time: 0:00:15 (1.5766 s / it) Averaged stats: model_time: 0.8551 (0.7928) evaluator_time: 0.0111 (0.0103) Accumulating evaluation results... DONE (t=0.05s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.178 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.353 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.158 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.178 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.490 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.517 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.517 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.517
0%| | 0/35 [00:00<?, ?it/s]
0%| | 0/10 [00:00<?, ?it/s]
Epoch: 5/20 | LR: 0.005000 *****Training***** Loss: 0.1541 | Classifier Loss: 0.0771 | Box Reg Loss: 0.0678 | RPN Box Reg Loss: 0.0057 | Objectness Loss: 0.0035 creating index... index created! Test: [ 0/35] eta: 0:11:06 model_time: 1.1177 (1.1177) evaluator_time: 0.1332 (0.1332) time: 19.0372 data: 17.6993 max mem: 11900 Test: [34/35] eta: 0:00:02 model_time: 0.8901 (0.9071) evaluator_time: 0.0265 (0.0572) time: 1.8282 data: 0.8420 max mem: 11900 Test: Total time: 0:01:24 (2.4260 s / it) Averaged stats: model_time: 0.8901 (0.9071) evaluator_time: 0.0265 (0.0572) Accumulating evaluation results... DONE (t=0.16s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.169 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.325 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.149 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.169 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.564 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.593 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.593 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.593 *****Validation***** Loss: 0.1528 | Classifier Loss: 0.0787 | Box Reg Loss: 0.0609 | RPN Box Reg Loss: 0.0057 | Objectness Loss: 0.0075 creating index... index created! 
Test: [ 0/10] eta: 0:01:18 model_time: 0.8229 (0.8229) evaluator_time: 0.0179 (0.0179) time: 7.8497 data: 6.9572 max mem: 11900 Test: [ 9/10] eta: 0:00:01 model_time: 0.8229 (0.7596) evaluator_time: 0.0117 (0.0120) time: 1.5112 data: 0.6961 max mem: 11900 Test: Total time: 0:00:15 (1.5262 s / it) Averaged stats: model_time: 0.8229 (0.7596) evaluator_time: 0.0117 (0.0120) Accumulating evaluation results... DONE (t=0.06s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.173 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.374 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.152 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.173 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.479 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.494 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.494 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.494
0%| | 0/35 [00:00<?, ?it/s]
0%| | 0/10 [00:00<?, ?it/s]
Epoch: 6/20 | LR: 0.005000 *****Training***** Loss: 0.1491 | Classifier Loss: 0.0746 | Box Reg Loss: 0.0653 | RPN Box Reg Loss: 0.0052 | Objectness Loss: 0.0041 creating index... index created! Test: [ 0/35] eta: 0:12:35 model_time: 1.0235 (1.0235) evaluator_time: 0.0666 (0.0666) time: 21.5799 data: 20.4189 max mem: 11900 Test: [34/35] eta: 0:00:02 model_time: 0.8496 (0.8684) evaluator_time: 0.0292 (0.0441) time: 1.6991 data: 0.7735 max mem: 11900 Test: Total time: 0:01:24 (2.4087 s / it) Averaged stats: model_time: 0.8496 (0.8684) evaluator_time: 0.0292 (0.0441) Accumulating evaluation results... DONE (t=0.12s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.218 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.431 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.198 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.218 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.565 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.583 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.583 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.583 *****Validation***** Loss: 0.1455 | Classifier Loss: 0.0719 | Box Reg Loss: 0.0579 | RPN Box Reg Loss: 0.0058 | Objectness Loss: 0.0099 creating index... index created! 
Test: [ 0/10] eta: 0:01:17 model_time: 0.7977 (0.7977) evaluator_time: 0.0129 (0.0129) time: 7.7816 data: 6.9207 max mem: 11900 Test: [ 9/10] eta: 0:00:01 model_time: 0.7977 (0.7411) evaluator_time: 0.0114 (0.0104) time: 1.4876 data: 0.6923 max mem: 11900 Test: Total time: 0:00:15 (1.5032 s / it) Averaged stats: model_time: 0.7977 (0.7411) evaluator_time: 0.0114 (0.0104) Accumulating evaluation results... DONE (t=0.05s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.200 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.410 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.204 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.200 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.519 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.522 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.522 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.522
0%| | 0/35 [00:00<?, ?it/s]
0%| | 0/10 [00:00<?, ?it/s]
Epoch: 7/20 | LR: 0.005000 *****Training***** Loss: 0.1432 | Classifier Loss: 0.0721 | Box Reg Loss: 0.0631 | RPN Box Reg Loss: 0.0053 | Objectness Loss: 0.0027 creating index... index created! Test: [ 0/35] eta: 0:13:21 model_time: 0.9493 (0.9493) evaluator_time: 0.0532 (0.0532) time: 22.8997 data: 21.8394 max mem: 11910 Test: [34/35] eta: 0:00:02 model_time: 0.8689 (0.8648) evaluator_time: 0.0449 (0.0778) time: 1.8894 data: 0.9524 max mem: 11910 Test: Total time: 0:01:28 (2.5203 s / it) Averaged stats: model_time: 0.8689 (0.8648) evaluator_time: 0.0449 (0.0778) Accumulating evaluation results... DONE (t=0.13s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.229 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.431 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.217 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.229 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.574 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.585 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.585 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.585 *****Validation***** Loss: 0.1385 | Classifier Loss: 0.0669 | Box Reg Loss: 0.0593 | RPN Box Reg Loss: 0.0058 | Objectness Loss: 0.0065 creating index... index created! 
Test: [ 0/10] eta: 0:01:18 model_time: 0.8034 (0.8034) evaluator_time: 0.0116 (0.0116) time: 7.8588 data: 6.9932 max mem: 11910 Test: [ 9/10] eta: 0:00:01 model_time: 0.8034 (0.7403) evaluator_time: 0.0109 (0.0101) time: 1.4939 data: 0.6996 max mem: 11910 Test: Total time: 0:00:15 (1.5094 s / it) Averaged stats: model_time: 0.8034 (0.7403) evaluator_time: 0.0109 (0.0101) Accumulating evaluation results... DONE (t=0.05s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.176 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.390 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.110 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.176 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.462 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.466 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.466 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.466
Epoch: 8/20 | LR: 0.005000
*****Training*****
Loss: 0.1333 | Classifier Loss: 0.0673 | Box Reg Loss: 0.0578 | RPN Box Reg Loss: 0.0050 | Objectness Loss: 0.0031
creating index...
index created!
Test:  [ 0/35]  eta: 0:14:59  model_time: 0.8973 (0.8973)  evaluator_time: 0.1252 (0.1252)  time: 25.6924  data: 24.5989  max mem: 11910
Test:  [34/35]  eta: 0:00:02  model_time: 0.8767 (0.8702)  evaluator_time: 0.0324 (0.0490)  time: 1.7153  data: 0.7717  max mem: 11910
Test: Total time: 0:01:24 (2.4213 s / it)
Averaged stats: model_time: 0.8767 (0.8702)  evaluator_time: 0.0324 (0.0490)
Accumulating evaluation results...
DONE (t=0.13s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.251
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.456
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.277
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.251
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.588
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.602
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.602
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.602
*****Validation*****
Loss: 0.1493 | Classifier Loss: 0.0764 | Box Reg Loss: 0.0632 | RPN Box Reg Loss: 0.0054 | Objectness Loss: 0.0043
creating index...
index created!
Test:  [ 0/10]  eta: 0:01:21  model_time: 0.8048 (0.8048)  evaluator_time: 0.0139 (0.0139)  time: 8.1651  data: 7.2953  max mem: 11910
Test:  [ 9/10]  eta: 0:00:01  model_time: 0.8048 (0.7428)  evaluator_time: 0.0107 (0.0109)  time: 1.5274  data: 0.7298  max mem: 11910
Test: Total time: 0:00:15 (1.5420 s / it)
Averaged stats: model_time: 0.8048 (0.7428)  evaluator_time: 0.0107 (0.0109)
Accumulating evaluation results...
DONE (t=0.05s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.200
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.403
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.188
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.200
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.502
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.529
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.529
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.529
Epoch: 9/20 | LR: 0.005000
*****Training*****
Loss: 0.1315 | Classifier Loss: 0.0660 | Box Reg Loss: 0.0578 | RPN Box Reg Loss: 0.0051 | Objectness Loss: 0.0026
creating index...
index created!
Test:  [ 0/35]  eta: 0:14:49  model_time: 1.0423 (1.0423)  evaluator_time: 0.0307 (0.0307)  time: 25.4032  data: 24.2627  max mem: 11910
Test:  [34/35]  eta: 0:00:02  model_time: 0.8270 (0.8658)  evaluator_time: 0.0369 (0.0426)  time: 1.6280  data: 0.7119  max mem: 11910
Test: Total time: 0:01:21 (2.3247 s / it)
Averaged stats: model_time: 0.8270 (0.8658)  evaluator_time: 0.0369 (0.0426)
Accumulating evaluation results...
DONE (t=0.11s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.266
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.500
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.262
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.266
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.584
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.591
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.591
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.591
*****Validation*****
Loss: 0.1530 | Classifier Loss: 0.0806 | Box Reg Loss: 0.0563 | RPN Box Reg Loss: 0.0057 | Objectness Loss: 0.0105
creating index...
index created!
Test:  [ 0/10]  eta: 0:01:24  model_time: 0.8134 (0.8134)  evaluator_time: 0.0100 (0.0100)  time: 8.4107  data: 7.2755  max mem: 11910
Test:  [ 9/10]  eta: 0:00:01  model_time: 0.8134 (0.7419)  evaluator_time: 0.0097 (0.0094)  time: 1.5489  data: 0.7278  max mem: 11910
Test: Total time: 0:00:15 (1.5641 s / it)
Averaged stats: model_time: 0.8134 (0.7419)  evaluator_time: 0.0097 (0.0094)
Accumulating evaluation results...
DONE (t=0.05s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.210
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.384
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.215
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.211
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.528
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.539
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.539
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.539
Epoch: 10/20 | LR: 0.005000
*****Training*****
Loss: 0.1250 | Classifier Loss: 0.0621 | Box Reg Loss: 0.0553 | RPN Box Reg Loss: 0.0045 | Objectness Loss: 0.0032
creating index...
index created!
Test:  [ 0/35]  eta: 0:13:37  model_time: 1.0504 (1.0504)  evaluator_time: 0.1036 (0.1036)  time: 23.3608  data: 22.1311  max mem: 11910
Test:  [34/35]  eta: 0:00:02  model_time: 0.8506 (0.8722)  evaluator_time: 0.0295 (0.0446)  time: 1.8384  data: 0.9169  max mem: 11910
Test: Total time: 0:01:26 (2.4656 s / it)
Averaged stats: model_time: 0.8506 (0.8722)  evaluator_time: 0.0295 (0.0446)
Accumulating evaluation results...
DONE (t=0.11s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.301
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.532
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.317
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.301
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.625
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.632
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.632
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.632
*****Validation*****
Loss: 0.1569 | Classifier Loss: 0.0792 | Box Reg Loss: 0.0645 | RPN Box Reg Loss: 0.0052 | Objectness Loss: 0.0079
creating index...
index created!
Test:  [ 0/10]  eta: 0:01:18  model_time: 0.8277 (0.8277)  evaluator_time: 0.0114 (0.0114)  time: 7.8195  data: 6.9283  max mem: 11910
Test:  [ 9/10]  eta: 0:00:01  model_time: 0.8277 (0.7447)  evaluator_time: 0.0089 (0.0090)  time: 1.4910  data: 0.6931  max mem: 11910
Test: Total time: 0:00:15 (1.5050 s / it)
Averaged stats: model_time: 0.8277 (0.7447)  evaluator_time: 0.0089 (0.0090)
Accumulating evaluation results...
DONE (t=0.05s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.204
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.404
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.168
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.204
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.486
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.487
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.487
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.487
Epoch: 11/20 | LR: 0.005000
*****Training*****
Loss: 0.1181 | Classifier Loss: 0.0586 | Box Reg Loss: 0.0527 | RPN Box Reg Loss: 0.0043 | Objectness Loss: 0.0025
creating index...
index created!
Test:  [ 0/35]  eta: 0:13:00  model_time: 0.9037 (0.9037)  evaluator_time: 0.0781 (0.0781)  time: 22.3021  data: 21.2465  max mem: 11910
Test:  [34/35]  eta: 0:00:02  model_time: 0.8390 (0.8677)  evaluator_time: 0.0236 (0.0426)  time: 1.6776  data: 0.7740  max mem: 11910
Test: Total time: 0:01:23 (2.3874 s / it)
Averaged stats: model_time: 0.8390 (0.8677)  evaluator_time: 0.0236 (0.0426)
Accumulating evaluation results...
DONE (t=0.15s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.306
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.572
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.291
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.306
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.624
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.625
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.625
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.625
*****Validation*****
Loss: 0.1525 | Classifier Loss: 0.0768 | Box Reg Loss: 0.0635 | RPN Box Reg Loss: 0.0054 | Objectness Loss: 0.0067
creating index...
index created!
Test:  [ 0/10]  eta: 0:01:09  model_time: 0.8349 (0.8349)  evaluator_time: 0.0161 (0.0161)  time: 6.9500  data: 6.0470  max mem: 11910
Test:  [ 9/10]  eta: 0:00:01  model_time: 0.8269 (0.7469)  evaluator_time: 0.0095 (0.0106)  time: 1.4332  data: 0.6316  max mem: 11910
Test: Total time: 0:00:14 (1.4482 s / it)
Averaged stats: model_time: 0.8269 (0.7469)  evaluator_time: 0.0095 (0.0106)
Accumulating evaluation results...
DONE (t=0.05s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.208
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.419
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.151
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.209
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.498
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.512
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.512
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.512
Epoch: 12/20 | LR: 0.005000
*****Training*****
Loss: 0.1140 | Classifier Loss: 0.0570 | Box Reg Loss: 0.0508 | RPN Box Reg Loss: 0.0041 | Objectness Loss: 0.0020
creating index...
index created!
Test:  [ 0/35]  eta: 0:15:15  model_time: 0.9508 (0.9508)  evaluator_time: 0.0688 (0.0688)  time: 26.1597  data: 25.0782  max mem: 11910
Test:  [34/35]  eta: 0:00:02  model_time: 0.8500 (0.8622)  evaluator_time: 0.0111 (0.0708)  time: 1.7274  data: 0.7536  max mem: 11910
Test: Total time: 0:01:25 (2.4337 s / it)
Averaged stats: model_time: 0.8500 (0.8622)  evaluator_time: 0.0111 (0.0708)
Accumulating evaluation results...
DONE (t=0.11s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.356
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.602
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.395
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.356
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.653
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.659
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.659
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.659
*****Validation*****
Loss: 0.1507 | Classifier Loss: 0.0757 | Box Reg Loss: 0.0592 | RPN Box Reg Loss: 0.0055 | Objectness Loss: 0.0103
creating index...
index created!
Test:  [ 0/10]  eta: 0:01:09  model_time: 0.8075 (0.8075)  evaluator_time: 0.0151 (0.0151)  time: 6.9533  data: 6.0791  max mem: 11910
Test:  [ 9/10]  eta: 0:00:01  model_time: 0.8075 (0.7436)  evaluator_time: 0.0093 (0.0108)  time: 1.4068  data: 0.6087  max mem: 11910
Test: Total time: 0:00:14 (1.4227 s / it)
Averaged stats: model_time: 0.8075 (0.7436)  evaluator_time: 0.0093 (0.0108)
Accumulating evaluation results...
DONE (t=0.05s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.224
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.427
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.223
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.224
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.502
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.502
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.502
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.502
Epoch: 13/20 | LR: 0.005000
*****Training*****
Loss: 0.1085 | Classifier Loss: 0.0542 | Box Reg Loss: 0.0481 | RPN Box Reg Loss: 0.0038 | Objectness Loss: 0.0024
creating index...
index created!
Test:  [ 0/35]  eta: 0:15:12  model_time: 1.0926 (1.0926)  evaluator_time: 0.1234 (0.1234)  time: 26.0840  data: 24.7947  max mem: 11910
Test:  [34/35]  eta: 0:00:02  model_time: 0.8885 (0.8931)  evaluator_time: 0.0157 (0.0373)  time: 1.6438  data: 0.7044  max mem: 11910
Test: Total time: 0:01:24 (2.4036 s / it)
Averaged stats: model_time: 0.8885 (0.8931)  evaluator_time: 0.0157 (0.0373)
Accumulating evaluation results...
DONE (t=0.12s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.348
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.620
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.338
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.348
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.628
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.631
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.631
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.631
*****Validation*****
Loss: 0.1466 | Classifier Loss: 0.0742 | Box Reg Loss: 0.0625 | RPN Box Reg Loss: 0.0050 | Objectness Loss: 0.0049
creating index...
index created!
Test:  [ 0/10]  eta: 0:01:19  model_time: 0.8576 (0.8576)  evaluator_time: 0.0082 (0.0082)  time: 7.9950  data: 7.0835  max mem: 11910
Test:  [ 9/10]  eta: 0:00:01  model_time: 0.8576 (0.7869)  evaluator_time: 0.0100 (0.0092)  time: 1.5435  data: 0.7086  max mem: 11910
Test: Total time: 0:00:15 (1.5578 s / it)
Averaged stats: model_time: 0.8576 (0.7869)  evaluator_time: 0.0100 (0.0092)
Accumulating evaluation results...
DONE (t=0.04s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.230
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.460
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.238
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.230
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.511
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.522
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.522
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.522
Epoch: 14/20 | LR: 0.005000
*****Training*****
Loss: 0.1032 | Classifier Loss: 0.0518 | Box Reg Loss: 0.0460 | RPN Box Reg Loss: 0.0038 | Objectness Loss: 0.0016
creating index...
index created!
Test:  [ 0/35]  eta: 0:12:40  model_time: 0.9979 (0.9979)  evaluator_time: 0.1076 (0.1076)  time: 21.7353  data: 20.5554  max mem: 11935
Test:  [34/35]  eta: 0:00:02  model_time: 0.9053 (0.9198)  evaluator_time: 0.0219 (0.0425)  time: 1.8436  data: 0.8738  max mem: 11935
Test: Total time: 0:01:24 (2.4157 s / it)
Averaged stats: model_time: 0.9053 (0.9198)  evaluator_time: 0.0219 (0.0425)
Accumulating evaluation results...
DONE (t=0.11s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.411
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.680
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.453
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.411
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.674
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.675
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.675
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.675
*****Validation*****
Loss: 0.1390 | Classifier Loss: 0.0711 | Box Reg Loss: 0.0583 | RPN Box Reg Loss: 0.0048 | Objectness Loss: 0.0049
creating index...
index created!
Test:  [ 0/10]  eta: 0:01:22  model_time: 0.8582 (0.8582)  evaluator_time: 0.0121 (0.0121)  time: 8.2497  data: 7.3334  max mem: 11935
Test:  [ 9/10]  eta: 0:00:01  model_time: 0.8582 (0.7918)  evaluator_time: 0.0094 (0.0101)  time: 1.5744  data: 0.7336  max mem: 11935
Test: Total time: 0:00:15 (1.5899 s / it)
Averaged stats: model_time: 0.8582 (0.7918)  evaluator_time: 0.0094 (0.0101)
Accumulating evaluation results...
DONE (t=0.04s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.222
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.426
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.219
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.223
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.521
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.528
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.528
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.528
Epoch: 15/20 | LR: 0.005000
*****Training*****
Loss: 0.1065 | Classifier Loss: 0.0527 | Box Reg Loss: 0.0477 | RPN Box Reg Loss: 0.0036 | Objectness Loss: 0.0025
creating index...
index created!
Test:  [ 0/35]  eta: 0:12:34  model_time: 1.0856 (1.0856)  evaluator_time: 0.0924 (0.0924)  time: 21.5442  data: 20.2958  max mem: 11935
Test:  [34/35]  eta: 0:00:02  model_time: 0.8600 (0.8772)  evaluator_time: 0.0233 (0.0744)  time: 1.7745  data: 0.7862  max mem: 11935
Test: Total time: 0:01:23 (2.3746 s / it)
Averaged stats: model_time: 0.8600 (0.8772)  evaluator_time: 0.0233 (0.0744)
Accumulating evaluation results...
DONE (t=0.12s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.413
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.669
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.456
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.413
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.663
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.667
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.667
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.667
*****Validation*****
Loss: 0.1675 | Classifier Loss: 0.0836 | Box Reg Loss: 0.0679 | RPN Box Reg Loss: 0.0052 | Objectness Loss: 0.0108
creating index...
index created!
Test:  [ 0/10]  eta: 0:01:22  model_time: 0.8361 (0.8361)  evaluator_time: 0.0121 (0.0121)  time: 8.2017  data: 7.3020  max mem: 11935
Test:  [ 9/10]  eta: 0:00:01  model_time: 0.8317 (0.7503)  evaluator_time: 0.0094 (0.0095)  time: 1.5335  data: 0.7305  max mem: 11935
Test: Total time: 0:00:15 (1.5490 s / it)
Averaged stats: model_time: 0.8317 (0.7503)  evaluator_time: 0.0094 (0.0095)
Accumulating evaluation results...
DONE (t=0.05s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.199
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.400
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.136
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.199
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.503
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.505
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.505
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.505
Epoch: 16/20 | LR: 0.005000
*****Training*****
Loss: 0.1013 | Classifier Loss: 0.0505 | Box Reg Loss: 0.0449 | RPN Box Reg Loss: 0.0037 | Objectness Loss: 0.0022
creating index...
index created!
Test:  [ 0/35]  eta: 0:13:13  model_time: 0.9807 (0.9807)  evaluator_time: 0.0670 (0.0670)  time: 22.6845  data: 21.5696  max mem: 12608
Test:  [34/35]  eta: 0:00:02  model_time: 0.8481 (0.8720)  evaluator_time: 0.0321 (0.0427)  time: 1.7763  data: 0.8384  max mem: 12608
Test: Total time: 0:01:25 (2.4367 s / it)
Averaged stats: model_time: 0.8481 (0.8720)  evaluator_time: 0.0321 (0.0427)
Accumulating evaluation results...
DONE (t=0.11s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.403
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.673
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.427
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.403
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.663
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.664
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.664
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.664
*****Validation*****
Loss: 0.1592 | Classifier Loss: 0.0794 | Box Reg Loss: 0.0674 | RPN Box Reg Loss: 0.0053 | Objectness Loss: 0.0070
creating index...
index created!
Test:  [ 0/10]  eta: 0:01:13  model_time: 0.8589 (0.8589)  evaluator_time: 0.0245 (0.0245)  time: 7.3115  data: 6.3751  max mem: 12608
Test:  [ 9/10]  eta: 0:00:01  model_time: 0.8260 (0.7481)  evaluator_time: 0.0085 (0.0100)  time: 1.4504  data: 0.6487  max mem: 12608
Test: Total time: 0:00:14 (1.4651 s / it)
Averaged stats: model_time: 0.8260 (0.7481)  evaluator_time: 0.0085 (0.0100)
Accumulating evaluation results...
DONE (t=0.04s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.208
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.437
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.210
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.209
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.484
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.484
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.484
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.484
Epoch: 17/20 | LR: 0.005000
*****Training*****
Loss: 0.1040 | Classifier Loss: 0.0510 | Box Reg Loss: 0.0472 | RPN Box Reg Loss: 0.0035 | Objectness Loss: 0.0024
creating index...
index created!
Test:  [ 0/35]  eta: 0:12:20  model_time: 1.1003 (1.1003)  evaluator_time: 0.1200 (0.1200)  time: 21.1436  data: 19.8269  max mem: 12608
Test:  [34/35]  eta: 0:00:02  model_time: 0.8392 (0.8787)  evaluator_time: 0.0192 (0.0408)  time: 1.7155  data: 0.7968  max mem: 12608
Test: Total time: 0:01:23 (2.3930 s / it)
Averaged stats: model_time: 0.8392 (0.8787)  evaluator_time: 0.0192 (0.0408)
Accumulating evaluation results...
DONE (t=0.11s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.428
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.727
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.439
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.428
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.658
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.661
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.661
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.661
*****Validation*****
Loss: 0.1449 | Classifier Loss: 0.0751 | Box Reg Loss: 0.0600 | RPN Box Reg Loss: 0.0047 | Objectness Loss: 0.0050
creating index...
index created!
Test:  [ 0/10]  eta: 0:01:19  model_time: 0.8102 (0.8102)  evaluator_time: 0.0111 (0.0111)  time: 7.9180  data: 7.0444  max mem: 12608
Test:  [ 9/10]  eta: 0:00:01  model_time: 0.8102 (0.7457)  evaluator_time: 0.0111 (0.0105)  time: 1.5048  data: 0.7048  max mem: 12608
Test: Total time: 0:00:15 (1.5212 s / it)
Averaged stats: model_time: 0.8102 (0.7457)  evaluator_time: 0.0111 (0.0105)
Accumulating evaluation results...
DONE (t=0.05s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.236
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.437
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.255
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.236
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.517
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.525
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.525
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.525
Epoch: 18/20 | LR: 0.005000 *****Training***** Loss: 0.0961 | Classifier Loss: 0.0470 | Box Reg Loss: 0.0441 | RPN Box Reg Loss: 0.0034 | Objectness Loss: 0.0016 creating index... index created! Test: [ 0/35] eta: 0:15:24 model_time: 0.9794 (0.9794) evaluator_time: 0.0493 (0.0493) time: 26.4045 data: 25.3046 max mem: 12608 Test: [34/35] eta: 0:00:02 model_time: 0.8548 (0.8962) evaluator_time: 0.0168 (0.0378) time: 1.7762 data: 0.8078 max mem: 12608 Test: Total time: 0:01:24 (2.4003 s / it) Averaged stats: model_time: 0.8548 (0.8962) evaluator_time: 0.0168 (0.0378) Accumulating evaluation results... DONE (t=0.11s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.447 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.733 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.486 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.447 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.677 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.683 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.683 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.683 *****Validation***** Loss: 0.1505 | Classifier Loss: 0.0789 | Box Reg Loss: 0.0591 | RPN Box Reg Loss: 0.0053 | Objectness Loss: 0.0073 creating index... index created! 
Test: [ 0/10]  eta: 0:01:19  model_time: 0.8210 (0.8210)  evaluator_time: 0.0114 (0.0114)  time: 7.9991  data: 7.1155  max mem: 12608
Test: [ 9/10]  eta: 0:00:01  model_time: 0.8210 (0.7472)  evaluator_time: 0.0114 (0.0111)  time: 1.5142  data: 0.7119  max mem: 12608
Test: Total time: 0:00:15 (1.5299 s / it)
Averaged stats: model_time: 0.8210 (0.7472)  evaluator_time: 0.0114 (0.0111)
Accumulating evaluation results...
DONE (t=0.05s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.223
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.417
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.236
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.224
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.528
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.528
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.528
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.528
Epoch: 19/20 | LR: 0.005000
*****Training*****
Loss: 0.0918 | Classifier Loss: 0.0450 | Box Reg Loss: 0.0416 | RPN Box Reg Loss: 0.0036 | Objectness Loss: 0.0016
creating index...
index created!
Test: [ 0/35]  eta: 0:15:03  model_time: 1.0055 (1.0055)  evaluator_time: 0.0835 (0.0835)  time: 25.8053  data: 24.6422  max mem: 12608
Test: [34/35]  eta: 0:00:02  model_time: 0.8530 (0.8704)  evaluator_time: 0.0167 (0.0369)  time: 1.7917  data: 0.8828  max mem: 12608
Test: Total time: 0:01:23 (2.3900 s / it)
Averaged stats: model_time: 0.8530 (0.8704)  evaluator_time: 0.0167 (0.0369)
Accumulating evaluation results...
DONE (t=0.10s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.467
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.754
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.514
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.467
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.684
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.686
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.686
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.686
*****Validation*****
Loss: 0.1526 | Classifier Loss: 0.0768 | Box Reg Loss: 0.0622 | RPN Box Reg Loss: 0.0051 | Objectness Loss: 0.0085
creating index...
index created!
Test: [ 0/10]  eta: 0:01:18  model_time: 0.8306 (0.8306)  evaluator_time: 0.0088 (0.0088)  time: 7.8161  data: 6.9243  max mem: 12608
Test: [ 9/10]  eta: 0:00:01  model_time: 0.8247 (0.7454)  evaluator_time: 0.0092 (0.0090)  time: 1.4909  data: 0.6927  max mem: 12608
Test: Total time: 0:00:15 (1.5059 s / it)
Averaged stats: model_time: 0.8247 (0.7454)  evaluator_time: 0.0092 (0.0090)
Accumulating evaluation results...
DONE (t=0.05s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.234
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.477
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.200
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.234
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.488
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.491
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.491
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.491
Epoch: 20/20 | LR: 0.005000
*****Training*****
Loss: 0.0877 | Classifier Loss: 0.0417 | Box Reg Loss: 0.0408 | RPN Box Reg Loss: 0.0035 | Objectness Loss: 0.0017
creating index...
index created!
Test: [ 0/35]  eta: 0:12:36  model_time: 1.0142 (1.0142)  evaluator_time: 0.0905 (0.0905)  time: 21.6208  data: 20.4427  max mem: 12608
Test: [34/35]  eta: 0:00:02  model_time: 0.8369 (0.8634)  evaluator_time: 0.0133 (0.0386)  time: 1.6940  data: 0.7888  max mem: 12608
Test: Total time: 0:01:24 (2.4075 s / it)
Averaged stats: model_time: 0.8369 (0.8634)  evaluator_time: 0.0133 (0.0386)
Accumulating evaluation results...
DONE (t=0.10s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.482
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.774
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.559
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.482
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.685
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.689
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.689
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.689
*****Validation*****
Loss: 0.1634 | Classifier Loss: 0.0805 | Box Reg Loss: 0.0687 | RPN Box Reg Loss: 0.0048 | Objectness Loss: 0.0093
creating index...
index created!
Test: [ 0/10]  eta: 0:01:21  model_time: 0.8079 (0.8079)  evaluator_time: 0.0096 (0.0096)  time: 8.1217  data: 7.2531  max mem: 12608
Test: [ 9/10]  eta: 0:00:01  model_time: 0.8079 (0.7425)  evaluator_time: 0.0096 (0.0101)  time: 1.5220  data: 0.7256  max mem: 12608
Test: Total time: 0:00:15 (1.5372 s / it)
Averaged stats: model_time: 0.8079 (0.7425)  evaluator_time: 0.0096 (0.0101)
Accumulating evaluation results...
DONE (t=0.05s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.216
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.430
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.197
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.216
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.500
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.500
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.500
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.500
Best epoch in 19
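Every AP/AR row in the evaluation logs above is thresholded on bounding-box IoU (e.g. AP@0.50 counts a detection as correct only when its IoU with a ground-truth box is at least 0.5). As a quick reference, a minimal IoU computation for two `[xmin, ymin, xmax, ymax]` boxes, a generic sketch rather than code from this notebook:

```python
# Intersection-over-Union for two boxes in [xmin, ymin, xmax, ymax] form.
def iou(a, b):
    # Width/height of the intersection rectangle (0 if the boxes are disjoint).
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    # Union = sum of areas minus the doubly counted intersection.
    return inter / (area_a + area_b - inter)

print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # 1/7, about 0.1429
```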
import os
import pandas as pd
import torch
from torchvision.transforms import functional as F
from tqdm.notebook import tqdm
from PIL import Image
# Define the test dataset class
class TestDataset:
def __init__(self, root, info_root, transforms=None):
self.root = root
self.transforms = transforms
self.test_info = pd.read_csv(os.path.join(info_root, "test.csv"))
def __len__(self):
return len(self.test_info)
def __getitem__(self, idx):
        # Locate the file via the Filename column
        file_name = self.test_info.iloc[idx]["Filename"]
        img_path = os.path.join(self.root, "image", file_name.replace(".dcm", ".jpg"))  # map the .dcm name to its .jpg
if not os.path.exists(img_path):
raise FileNotFoundError(f"Image file not found: {img_path}")
image = Image.open(img_path).convert("RGB")
width = self.test_info.iloc[idx]["Width"]
height = self.test_info.iloc[idx]["Height"]
if self.transforms:
image = self.transforms(image)
return self.test_info.iloc[idx]["ID"], image, width, height
# Inference over the test set
def inference_on_test_set(model, test_loader, device):
model.eval()
results = []
with torch.no_grad():
for img_ids, images, widths, heights in tqdm(test_loader):
images = [img.to(device) for img in images]
outputs = model(images)
for img_id, output, img, w, h in zip(img_ids, outputs, images, widths, heights):
                w, h = float(w), float(h)  # cast width and height to float
                for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
                    if score < 0.5:  # drop low-confidence boxes
                        continue
                    # Normalize box coordinates to [0, 1]
                    xmin, ymin, xmax, ymax = box.tolist()
                    xmin, xmax = xmin / w, xmax / w
                    ymin, ymax = ymin / h, ymax / h
                    category = config2.categories[label.item() - 1]  # map class index to category name
results.append({
"ID": img_id,
"category": category,
"score": score.item(),
"xmin": xmin,
"xmax": xmax,
"ymin": ymin,
"ymax": ymax
})
return results
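The division by `w` and `h` inside `inference_on_test_set` maps pixel coordinates into [0, 1] so the submission is independent of each DICOM's resolution. A minimal stand-alone sketch of that step (the image size is one that appears in train.csv; the box values are made up for illustration):

```python
# Normalize a pixel-space box by the image dimensions, as done above.
def normalize_box(box, width, height):
    xmin, ymin, xmax, ymax = box
    return (xmin / width, ymin / height, xmax / width, ymax / height)

# A 2328x2344 X-ray with a hypothetical box at (582, 586, 1164, 1758):
print(normalize_box((582, 586, 1164, 1758), 2328, 2344))  # (0.25, 0.25, 0.5, 0.75)
```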
# Save results to a CSV file
def save_results_to_csv(results, output_path):
df = pd.DataFrame(results)
df.to_csv(output_path, index=False)
print(f"Results saved to {output_path}")
return df
# Main routine
def main():
    # Select the compute device
    device = config2.device
    # Initialize the model and load the final checkpoint
    model = fasterrcnn(num_classes=config2.num_classes)
    model.load_state_dict(torch.load(os.path.join(config2.save_root, "final.pth"))["model"])
    model.to(device)
    # Build the test dataset
test_dataset = TestDataset(
root=config2.test_root,
info_root=config2.info_root_test,
transforms=F.to_tensor
)
test_loader = torch.utils.data.DataLoader(
test_dataset,
        batch_size=1,  # process one image at a time
shuffle=False,
collate_fn=lambda x: tuple(zip(*x))
)
    # Run inference
    print("Running inference on test set...")
    results = inference_on_test_set(model, test_loader, device)
    # Save the results
df = save_results_to_csv(results, os.path.join(config2.save_root, "submission.csv"))
return df
if __name__ == "__main__":
    # Category names are defined in config2
class config2:
test_root = "/kaggle/working/test"
info_root_test = "/kaggle/input/hwk05-data/hwk05_data/test"
save_root = "/kaggle/working"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
num_classes = 8
categories = [
"aortic_curvature",
"aortic_atherosclerosis_calcification",
"cardiac_hypertrophy",
"intercostal_pleural_thickening",
"lung_field_infiltration",
"degenerative_joint_disease_of_the_thoracic",
"scoliosis",
"normal"
]
pred_df=main()
Running inference on test set...
Results saved to /kaggle/working/submission.csv
# Show the first 10 rows, the last 10 rows, and the dataframe shape
print("First 10 rows:")
print(pred_df.head(10))
print("\nLast 10 rows:")
print(pred_df.tail(10))
print("\nDataframe shape:")
print(pred_df.shape)
First 10 rows:
ID category \
0 TDR02_20161123_145314 aortic_atherosclerosis_calcification
1 TDR02_20161123_145314 aortic_curvature
2 TDR02_20161123_145314 lung_field_infiltration
3 TDR02_20161123_145314 degenerative_joint_disease_of_the_thoracic
4 TDR01_20171106_111727 degenerative_joint_disease_of_the_thoracic
5 TDR01_20171106_111727 lung_field_infiltration
6 TDR01_20171106_111727 aortic_curvature
7 TDR01_20171106_111727 scoliosis
8 TDR01_20180510_090210 degenerative_joint_disease_of_the_thoracic
9 TDR01_20180511_092549 scoliosis
score xmin xmax ymin ymax
0 0.961782 0.501516 0.664308 0.200134 0.381468
1 0.787029 0.374931 0.675504 0.232649 0.719997
2 0.758716 0.063560 0.959290 0.010014 0.866449
3 0.549290 0.365644 0.673442 0.016905 0.774482
4 0.837700 0.360594 0.624836 0.072813 0.807553
5 0.596343 0.053755 0.940258 0.030877 0.854882
6 0.542351 0.359334 0.641187 0.229484 0.678211
7 0.519861 0.407437 0.616236 0.185018 0.890281
8 0.738120 0.363038 0.617012 0.122861 0.809470
9 0.724835 0.392327 0.619073 0.210029 0.839867
Last 10 rows:
ID category \
266 TDR02_20161125_122319 degenerative_joint_disease_of_the_thoracic
267 TDR02_20161125_122319 scoliosis
268 TDR02_20180123_115426 degenerative_joint_disease_of_the_thoracic
269 TDR02_20180123_115426 lung_field_infiltration
270 TDR01_20180508_173616 scoliosis
271 TDR02_20161118_145330 aortic_atherosclerosis_calcification
272 TDR02_20161118_145330 cardiac_hypertrophy
273 TDR02_20161118_145330 lung_field_infiltration
274 TDR02_20161118_145330 aortic_curvature
275 TDR02_20161118_145330 degenerative_joint_disease_of_the_thoracic
score xmin xmax ymin ymax
266 0.699098 0.381842 0.635729 0.054117 0.807661
267 0.500009 0.405378 0.613517 0.294539 0.858112
268 0.781155 0.365377 0.629589 0.083213 0.827556
269 0.720102 0.055466 0.915525 0.037781 0.866403
270 0.661993 0.388692 0.606562 0.226665 0.805151
271 0.979094 0.419940 0.596462 0.245768 0.422824
272 0.934155 0.297666 0.760472 0.360590 0.693891
273 0.766747 0.055702 0.894140 0.056281 0.816584
274 0.668177 0.273974 0.609571 0.255313 0.668751
275 0.504318 0.283422 0.588073 0.074076 0.742070
Dataframe shape:
(276, 7)
5-2¶
# import libraries
# basic
import warnings
warnings.filterwarnings('ignore')
import os
import random
import numpy as np
import pandas as pd
# visualization
import cv2
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
!pip install grad-cam
import pytorch_grad_cam
from pytorch_grad_cam import EigenCAM, AblationCAM
from pytorch_grad_cam.ablation_layer import AblationLayerFasterRCNN
from pytorch_grad_cam.utils.model_targets import FasterRCNNBoxScoreTarget
from pytorch_grad_cam.utils.reshape_transforms import fasterrcnn_reshape_transform
from pytorch_grad_cam.utils.image import show_cam_on_image
# PyTorch
import torch
import torchvision
from torchvision import models
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.transforms import v2
from torchvision.transforms.v2 import functional as F
Collecting grad-cam
Downloading grad-cam-1.5.4.tar.gz (7.8 MB)
āāāāāāāāāāāāāāāāāāāāāāāāāāāāāāāāāāāāāāāā 7.8/7.8 MB 58.0 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from grad-cam) (1.26.4)
Requirement already satisfied: Pillow in /usr/local/lib/python3.10/dist-packages (from grad-cam) (10.4.0)
Requirement already satisfied: torch>=1.7.1 in /usr/local/lib/python3.10/dist-packages (from grad-cam) (2.4.1+cu121)
Requirement already satisfied: torchvision>=0.8.2 in /usr/local/lib/python3.10/dist-packages (from grad-cam) (0.19.1+cu121)
Collecting ttach (from grad-cam)
Downloading ttach-0.0.3-py3-none-any.whl.metadata (5.2 kB)
Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from grad-cam) (4.66.5)
Requirement already satisfied: opencv-python in /usr/local/lib/python3.10/dist-packages (from grad-cam) (4.10.0.84)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.10/dist-packages (from grad-cam) (3.7.1)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.10/dist-packages (from grad-cam) (1.2.2)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch>=1.7.1->grad-cam) (3.16.1)
Requirement already satisfied: typing-extensions>=4.8.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.7.1->grad-cam) (4.12.2)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch>=1.7.1->grad-cam) (1.13.3)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.7.1->grad-cam) (3.3)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.7.1->grad-cam) (3.1.4)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from torch>=1.7.1->grad-cam) (2024.6.1)
Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->grad-cam) (1.3.0)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib->grad-cam) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->grad-cam) (4.53.1)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->grad-cam) (1.4.7)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->grad-cam) (24.1)
Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->grad-cam) (3.1.4)
Requirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.10/dist-packages (from matplotlib->grad-cam) (2.8.2)
Requirement already satisfied: scipy>=1.3.2 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->grad-cam) (1.13.1)
Requirement already satisfied: joblib>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->grad-cam) (1.4.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->grad-cam) (3.5.0)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.7->matplotlib->grad-cam) (1.16.0)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.7.1->grad-cam) (2.1.5)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in /usr/local/lib/python3.10/dist-packages (from sympy->torch>=1.7.1->grad-cam) (1.3.0)
Downloading ttach-0.0.3-py3-none-any.whl (9.8 kB)
Building wheels for collected packages: grad-cam
Building wheel for grad-cam (pyproject.toml) ... done
Created wheel for grad-cam: filename=grad_cam-1.5.4-py3-none-any.whl size=39588 sha256=666f98426457c54f6518315f1acf3e1bf652653fc49f931a0dab41fd1f26a479
Stored in directory: /root/.cache/pip/wheels/50/b0/82/1f97b5348c7fe9f0ce0ba18497202cafa5dec4562bd5292680
Successfully built grad-cam
Installing collected packages: ttach, grad-cam
Successfully installed grad-cam-1.5.4 ttach-0.0.3
class config3:
root = "/kaggle/working/train"
num_classes = 8
categories = ['normal', 'aortic_curvature', 'aortic_atherosclerosis_calcification',
'cardiac_hypertrophy', 'intercostal_pleural_thickening', 'lung_field_infiltration',
'degenerative_joint_disease_of_the_thoracic_spine', 'scoliosis']
seed = 42
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def seed_everything(seed):
# Set Python random seed
random.seed(seed)
# Set NumPy random seed
np.random.seed(seed)
# Set PyTorch random seed for CPU and GPU
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed_all(seed)
    # Make cuDNN deterministic; benchmark mode picks algorithms
    # non-deterministically, so it must be off for reproducibility
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
seed_everything(config3.seed)
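What `seed_everything` buys can be checked directly: re-seeding with the same value reproduces the same draws from both `random` and NumPy. This sketch covers only those two generators, not the CUDA side:

```python
import random
import numpy as np

def first_draws(seed):
    # Re-seed both generators, then take one draw from each.
    random.seed(seed)
    np.random.seed(seed)
    return random.random(), float(np.random.rand())

assert first_draws(42) == first_draws(42)  # same seed, identical stream
assert first_draws(42) != first_draws(43)  # different seed, different stream
```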
def predict(input_tensor, model, device, detection_threshold):
outputs = model(input_tensor)
pred_classes = [config3.categories[i] for i in outputs[0]['labels'].cpu().numpy()]
pred_labels = outputs[0]['labels'].cpu().numpy()
pred_scores = outputs[0]['scores'].detach().cpu().numpy()
pred_bboxes = outputs[0]['boxes'].detach().cpu().numpy()
    boxes, classes, labels, indices, scores = [], [], [], [], []
for index in range(len(pred_scores)):
if pred_scores[index] >= detection_threshold:
boxes.append(pred_bboxes[index].astype(np.int32))
classes.append(pred_classes[index])
labels.append(pred_labels[index])
indices.append(index)
scores.append(pred_scores[index])
boxes = np.int32(boxes)
return boxes, classes, labels, indices, scores
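The index loop in `predict` keeps only detections whose score clears `detection_threshold`; the same filtering can be written as a single NumPy boolean mask. A sketch with made-up scores and boxes, not real model output:

```python
import numpy as np

scores = np.array([0.95, 0.40, 0.72])
boxes = np.array([[10, 10, 50, 50], [0, 0, 5, 5], [20, 20, 80, 80]])

keep = scores >= 0.5          # boolean mask, one entry per detection
kept_boxes = boxes[keep]      # rows 0 and 2 survive
kept_scores = scores[keep]
print(kept_boxes.shape)       # (2, 4)
```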
COLORS = np.random.uniform(0, 255, size=(len(config3.categories), 3))
def draw_boxes(boxes, labels, classes, image):
for i, box in enumerate(boxes):
# Convert RGB to BGR for OpenCV
color = COLORS[labels[i]].astype(int)[::-1]
# Draw the bounding box
cv2.rectangle(
image,
(int(box[0]), int(box[1])),
(int(box[2]), int(box[3])),
color.tolist(), 8
)
# Increase font size and thickness for label
font_scale = 4 # Increase this value for larger font
thickness = 10 # Increase thickness for better visibility
# Add class label as text
cv2.putText(image, classes[i],
(int(box[0]), int(box[1]) - 10), # Adjust text position
cv2.FONT_HERSHEY_SIMPLEX,
font_scale,
color.tolist(),
thickness,
lineType=cv2.LINE_AA)
return image
def get_transform():
transform = v2.Compose(
[
            v2.ToImage(),  # convert the PIL image to an image tensor
#v2.ConvertBoundingBoxFormat(tv_tensors.BoundingBoxFormat.XYXY),
v2.ToDtype(torch.float32, scale=True),
])
return transform
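`get_transform()` turns a uint8 PIL image into a float32 CHW tensor scaled to [0, 1]. A NumPy stand-in for the same arithmetic, meant to illustrate the semantics rather than the torchvision implementation:

```python
import numpy as np

# A tiny 4x6 white RGB image in HWC uint8 layout, like a decoded JPEG.
hwc = np.full((4, 6, 3), 255, dtype=np.uint8)

# v2.ToImage() moves channels first; v2.ToDtype(torch.float32, scale=True)
# casts and rescales 0..255 into 0.0..1.0.
chw = np.transpose(hwc, (2, 0, 1)).astype(np.float32) / 255.0
print(chw.shape, chw.dtype, chw.max())  # (3, 4, 6) float32 1.0
```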
def plot_eigen_cam_images(transforms, model, cat, threshold):
rows, cols = 4, 2
fig = plt.figure(figsize=(10, 20)) # Adjust figure size
grid = plt.GridSpec(rows, cols)
best_ckpt = torch.load("/kaggle/working/final.pth", map_location=config3.device)
model.load_state_dict(best_ckpt["model"])
model.eval().to(config3.device)
target_layers = [model.backbone]
cam = EigenCAM(model,
target_layers,
reshape_transform=fasterrcnn_reshape_transform)
for i in range(rows * cols):
all_images = os.listdir(os.path.join(config3.root, config3.categories[i]))
image_path = os.path.join(config3.root, config3.categories[i], all_images[0])
image = Image.open(image_path).convert("RGB")
input_tensor = transforms(image)
input_tensor = input_tensor.to(config3.device)
input_tensor = input_tensor.unsqueeze(0)
image = np.array(image)
image_float_np = np.float32(image) / 255
boxes, classes, labels, indices, scores = predict(input_tensor, model, config3.device, threshold)
image = draw_boxes(boxes, labels, classes, image)
targets = [FasterRCNNBoxScoreTarget(labels=labels, bounding_boxes=boxes)]
grayscale_cam = cam(input_tensor, targets=targets)
grayscale_cam = grayscale_cam[0, :]
cam_image = show_cam_on_image(image_float_np, grayscale_cam, use_rgb=True)
image_with_bounding_boxes = draw_boxes(boxes, labels, classes, cam_image)
categories = fig.add_subplot(grid[i])
categories.set_axis_off()
gs = gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=grid[i])
ax = fig.add_subplot(gs[0])
ax.imshow(image_with_bounding_boxes)
ax.set_title(f"{config3.categories[i]}")
ax.axis("off")
fig.patch.set_facecolor('white')
fig.suptitle("EigenCAM Images of 8 categories\n", fontweight='bold', size=16)
fig.tight_layout()
fig.subplots_adjust(wspace=0.2, hspace=0.4) # Add extra space between plots
def plot_ablation_cam_images(transforms, model):
rows, cols = 4, 2
fig = plt.figure(figsize = (10, 20))
grid = plt.GridSpec(rows, cols)
best_ckpt = torch.load("/kaggle/working/final.pth", map_location = config3.device)
model.load_state_dict(best_ckpt["model"])
model.eval().to(config3.device)
target_layers = [model.backbone]
cam = AblationCAM(model,
target_layers,
reshape_transform = fasterrcnn_reshape_transform,
ablation_layer = AblationLayerFasterRCNN(),
ratio_channels_to_ablate = 1.0)
for i in range(rows * cols):
all_images = os.listdir(os.path.join(config3.root, config3.categories[i]))
image_path = os.path.join(config3.root, config3.categories[i], all_images[0])
image = Image.open(image_path).convert("RGB")
input_tensor = transforms(image)
input_tensor = input_tensor.to(config3.device)
input_tensor = input_tensor.unsqueeze(0)
image = np.array(image)
image_float_np = np.float32(image) / 255
boxes, classes, labels, indices, scores = predict(input_tensor, model, config3.device, 0)
image = draw_boxes(boxes, labels, classes, image)
targets = [FasterRCNNBoxScoreTarget(labels = labels, bounding_boxes = boxes)]
grayscale_cam = cam(input_tensor, targets = targets)
grayscale_cam = grayscale_cam[0, :]
cam_image = show_cam_on_image(image_float_np, grayscale_cam, use_rgb = True)
image_with_bounding_boxes = draw_boxes(boxes, labels, classes, cam_image)
categories = fig.add_subplot(grid[i])
categories.set_axis_off()
gs = gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec = grid[i])
ax = fig.add_subplot(gs[0])
ax.imshow(image_with_bounding_boxes)
ax.set_title(f"{config3.categories[i]}")
ax.axis("off")
fig.patch.set_facecolor('white')
fig.suptitle("AblationCAM Images of 8 categories\n", fontweight = 'bold', size = 16)
fig.tight_layout()
result = plot_eigen_cam_images(transforms = get_transform(), model = fasterrcnn(config3.num_classes), cat=0, threshold=0)
plot_ablation_cam_images(transforms = get_transform(), model = fasterrcnn(config3.num_classes))
100%|āāāāāāāāāā| 40/40 [00:44<00:00, 1.12s/it]
100%|āāāāāāāāāā| 40/40 [00:49<00:00, 1.23s/it]
100%|āāāāāāāāāā| 40/40 [00:52<00:00, 1.32s/it]
100%|āāāāāāāāāā| 40/40 [00:52<00:00, 1.31s/it]
100%|āāāāāāāāāā| 40/40 [00:55<00:00, 1.40s/it]
100%|āāāāāāāāāā| 40/40 [00:39<00:00, 1.00it/s]
100%|āāāāāāāāāā| 40/40 [00:40<00:00, 1.01s/it]
100%|āāāāāāāāāā| 40/40 [00:54<00:00, 1.37s/it]
!zip -r file.zip /kaggle/working
adding: kaggle/working/ (stored 0%)
adding: kaggle/working/final.pth (deflated 7%)
adding: kaggle/working/coco_utils.py (deflated 71%)
adding: kaggle/working/train.json (deflated 83%)
adding: kaggle/working/engine.py (deflated 66%)
adding: kaggle/working/transforms.py (deflated 76%)
adding: kaggle/working/__pycache__/ (stored 0%)
adding: kaggle/working/train/ (stored 0%)
adding: kaggle/working/train/aortic_curvature/ (stored 0%)
adding: kaggle/working/train/lung_field_infiltration/ (stored 0%)
adding: kaggle/working/train/aortic_atherosclerosis_calcification/ (stored 0%)
...
kaggle/working/train/aortic_atherosclerosis_calcification/10_4d.jpg (deflated 4%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/220_0b.jpg (deflated 8%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/220b.jpg (deflated 6%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/220_0a.jpg (deflated 7%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/10_0 (2)d.jpg (deflated 6%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/10_12a.jpg (deflated 5%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/4440_3b.jpg (deflated 8%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/4440_0d.jpg (deflated 6%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/220_2a.jpg (deflated 3%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/Volume0b.jpg (deflated 7%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/10b.jpg (deflated 4%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/10_7a.jpg (deflated 5%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/220_5a.jpg (deflated 4%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/10_4a.jpg (deflated 5%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/A0_0d.jpg (deflated 6%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/10_1a.jpg (deflated 8%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/4440_0a.jpg (deflated 4%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/220a.jpg (deflated 7%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/Volume0_6c.jpg (deflated 6%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/10_10a.jpg (deflated 6%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/FILE0_0c.jpg (deflated 10%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/FILE0_7c.jpg (deflated 8%) 
adding: kaggle/working/train/aortic_atherosclerosis_calcification/10_1b.jpg (deflated 4%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/Volume0_11c.jpg (deflated 8%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/FILE0b.jpg (deflated 5%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/10d.jpg (deflated 6%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/10_0a.jpg (deflated 6%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/10_3a.jpg (deflated 5%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/220 (2)d.jpg (deflated 3%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/10_1 (2)d.jpg (deflated 5%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/4440d.jpg (deflated 7%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/FILE0d.jpg (deflated 7%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/220_3a.jpg (deflated 7%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/FILE0_5c.jpg (deflated 6%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/220_4a.jpg (deflated 3%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/Volume0_5c.jpg (deflated 5%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/FILE0_0b.jpg (deflated 6%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/A0d.jpg (deflated 5%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/10_3d.jpg (deflated 5%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/FILE0_1b.jpg (deflated 6%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/Volume0d.jpg (deflated 11%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/220_1b.jpg (deflated 8%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/Volume0_2c.jpg (deflated 4%) adding: 
kaggle/working/train/aortic_atherosclerosis_calcification/10_2d.jpg (deflated 6%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/10_0b.jpg (deflated 6%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/10_11a.jpg (deflated 5%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/FILE0_3c.jpg (deflated 9%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/10 (2)d.jpg (deflated 6%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/4440a.jpg (deflated 4%) adding: kaggle/working/train/aortic_atherosclerosis_calcification/FILE0_4c.jpg (deflated 6%) adding: kaggle/working/train/cardiac_hypertrophy/ (stored 0%) adding: kaggle/working/train/cardiac_hypertrophy/FILE0.jpg (deflated 6%) adding: kaggle/working/train/cardiac_hypertrophy/Volume0_0.jpg (deflated 11%) adding: kaggle/working/train/cardiac_hypertrophy/10_0 (2).jpg (deflated 5%) adding: kaggle/working/train/cardiac_hypertrophy/220 (2).jpg (deflated 7%) adding: kaggle/working/train/cardiac_hypertrophy/FILE0_1.jpg (deflated 7%) adding: kaggle/working/train/cardiac_hypertrophy/220.jpg (deflated 4%) adding: kaggle/working/train/cardiac_hypertrophy/A0_3.jpg (deflated 5%) adding: kaggle/working/train/cardiac_hypertrophy/Volume0_1.jpg (deflated 8%) adding: kaggle/working/train/cardiac_hypertrophy/10_0.jpg (deflated 6%) adding: kaggle/working/train/cardiac_hypertrophy/10_7.jpg (deflated 7%) adding: kaggle/working/train/cardiac_hypertrophy/FILE0_2.jpg (deflated 9%) adding: kaggle/working/train/cardiac_hypertrophy/10_9.jpg (deflated 5%) adding: kaggle/working/train/cardiac_hypertrophy/10_3 (2).jpg (deflated 5%) adding: kaggle/working/train/cardiac_hypertrophy/10_1 (2).jpg (deflated 7%) adding: kaggle/working/train/cardiac_hypertrophy/Volume0_2.jpg (deflated 4%) adding: kaggle/working/train/cardiac_hypertrophy/FILE0_6.jpg (deflated 7%) adding: kaggle/working/train/cardiac_hypertrophy/10_6.jpg (deflated 6%) adding: 
kaggle/working/train/cardiac_hypertrophy/10_2.jpg (deflated 6%) adding: kaggle/working/train/cardiac_hypertrophy/10_4.jpg (deflated 4%) adding: kaggle/working/train/cardiac_hypertrophy/A0_4.jpg (deflated 6%) adding: kaggle/working/train/cardiac_hypertrophy/10_12.jpg (deflated 5%) adding: kaggle/working/train/cardiac_hypertrophy/A0_0.jpg (deflated 6%) adding: kaggle/working/train/cardiac_hypertrophy/10_13.jpg (deflated 4%) adding: kaggle/working/train/cardiac_hypertrophy/4440.jpg (deflated 4%) adding: kaggle/working/train/cardiac_hypertrophy/10 (2).jpg (deflated 6%) adding: kaggle/working/train/cardiac_hypertrophy/FILE0_0.jpg (deflated 7%) adding: kaggle/working/train/cardiac_hypertrophy/A0_2.jpg (deflated 5%) adding: kaggle/working/train/cardiac_hypertrophy/10_5.jpg (deflated 6%) adding: kaggle/working/train/cardiac_hypertrophy/10_8.jpg (deflated 4%) adding: kaggle/working/train/cardiac_hypertrophy/Volume0.jpg (deflated 6%) adding: kaggle/working/train/cardiac_hypertrophy/FILE0_5.jpg (deflated 7%) adding: kaggle/working/train/cardiac_hypertrophy/10.jpg (deflated 4%) adding: kaggle/working/train/cardiac_hypertrophy/10_1.jpg (deflated 4%) adding: kaggle/working/train/intercostal_pleural_thickening/ (stored 0%) adding: kaggle/working/train/intercostal_pleural_thickening/220_5.jpg (deflated 3%) adding: kaggle/working/train/intercostal_pleural_thickening/220_4.jpg (deflated 4%) adding: kaggle/working/train/intercostal_pleural_thickening/10_0 (2).jpg (deflated 3%) adding: kaggle/working/train/intercostal_pleural_thickening/220 (2).jpg (deflated 4%) adding: kaggle/working/train/intercostal_pleural_thickening/220.jpg (deflated 7%) adding: kaggle/working/train/intercostal_pleural_thickening/220_1.jpg (deflated 7%) adding: kaggle/working/train/intercostal_pleural_thickening/220_6.jpg (deflated 3%) adding: kaggle/working/train/intercostal_pleural_thickening/A0_3.jpg (deflated 4%) adding: kaggle/working/train/intercostal_pleural_thickening/10_0.jpg (deflated 5%) adding: 
kaggle/working/train/intercostal_pleural_thickening/4440_1.jpg (deflated 9%) adding: kaggle/working/train/intercostal_pleural_thickening/10_3 (2).jpg (deflated 3%) adding: kaggle/working/train/intercostal_pleural_thickening/10_1 (2).jpg (deflated 4%) adding: kaggle/working/train/intercostal_pleural_thickening/4440_2.jpg (deflated 7%) adding: kaggle/working/train/intercostal_pleural_thickening/10_2.jpg (deflated 4%) adding: kaggle/working/train/intercostal_pleural_thickening/4440_0.jpg (deflated 7%) adding: kaggle/working/train/intercostal_pleural_thickening/10_4.jpg (deflated 5%) adding: kaggle/working/train/intercostal_pleural_thickening/A0_5.jpg (deflated 6%) adding: kaggle/working/train/intercostal_pleural_thickening/10_3.jpg (deflated 4%) adding: kaggle/working/train/intercostal_pleural_thickening/A0_1.jpg (deflated 4%) adding: kaggle/working/train/intercostal_pleural_thickening/A0_0.jpg (deflated 4%) adding: kaggle/working/train/intercostal_pleural_thickening/4440.jpg (deflated 4%) adding: kaggle/working/train/intercostal_pleural_thickening/10 (2).jpg (deflated 4%) adding: kaggle/working/train/intercostal_pleural_thickening/10_2 (2).jpg (deflated 5%) adding: kaggle/working/train/intercostal_pleural_thickening/A0_2.jpg (deflated 6%) adding: kaggle/working/train/intercostal_pleural_thickening/220_3.jpg (deflated 3%) adding: kaggle/working/train/intercostal_pleural_thickening/A0_6.jpg (deflated 6%) adding: kaggle/working/train/intercostal_pleural_thickening/A0.jpg (deflated 4%) adding: kaggle/working/train/intercostal_pleural_thickening/4440_4.jpg (deflated 7%) adding: kaggle/working/train/intercostal_pleural_thickening/10_1.jpg (deflated 7%) adding: kaggle/working/train/normal/ (stored 0%) adding: kaggle/working/train/normal/220_83.jpg (deflated 4%) adding: kaggle/working/train/normal/220_46.jpg (deflated 4%) adding: kaggle/working/train/normal/220_93.jpg (deflated 5%) adding: kaggle/working/train/normal/220_69.jpg (deflated 4%) adding: 
kaggle/working/train/normal/220_86.jpg (deflated 4%) adding: kaggle/working/train/normal/220.jpg (deflated 4%) adding: kaggle/working/train/normal/220_55.jpg (deflated 4%) adding: kaggle/working/train/normal/220_52.jpg (deflated 4%) adding: kaggle/working/train/normal/220_1.jpg (deflated 3%) adding: kaggle/working/train/normal/220_88.jpg (deflated 4%) adding: kaggle/working/train/normal/220_85.jpg (deflated 4%) adding: kaggle/working/train/normal/220_98.jpg (deflated 7%) adding: kaggle/working/train/normal/220_6.jpg (deflated 3%) adding: kaggle/working/train/normal/220_13.jpg (deflated 3%) adding: kaggle/working/train/normal/220_58.jpg (deflated 5%) adding: kaggle/working/train/normal/220_43.jpg (deflated 3%) adding: kaggle/working/train/normal/220_36.jpg (deflated 4%) adding: kaggle/working/train/normal/220_16.jpg (deflated 3%) adding: kaggle/working/train/normal/220_63.jpg (deflated 4%) adding: kaggle/working/train/normal/220_23.jpg (deflated 3%) adding: kaggle/working/train/normal/220_2.jpg (deflated 3%) adding: kaggle/working/train/normal/220_22.jpg (deflated 4%) adding: kaggle/working/train/normal/220_59.jpg (deflated 4%) adding: kaggle/working/train/normal/220_91.jpg (deflated 4%) adding: kaggle/working/train/normal/220_97.jpg (deflated 4%) adding: kaggle/working/train/normal/220_77.jpg (deflated 4%) adding: kaggle/working/train/normal/220_7.jpg (deflated 3%) adding: kaggle/working/train/normal/220_31.jpg (deflated 3%) adding: kaggle/working/train/normal/220_54.jpg (deflated 4%) adding: kaggle/working/train/normal/220_45.jpg (deflated 4%) adding: kaggle/working/train/normal/220_75.jpg (deflated 4%) adding: kaggle/working/train/normal/220_37.jpg (deflated 3%) adding: kaggle/working/train/normal/220_25.jpg (deflated 3%) adding: kaggle/working/train/normal/220_74.jpg (deflated 5%) adding: kaggle/working/train/normal/220_64.jpg (deflated 4%) adding: kaggle/working/train/normal/220_29.jpg (deflated 4%) adding: kaggle/working/train/normal/220_49.jpg (deflated 4%) 
adding: kaggle/working/train/normal/220_89.jpg (deflated 5%) adding: kaggle/working/train/normal/220_53.jpg (deflated 4%) adding: kaggle/working/train/normal/220_80.jpg (deflated 3%) adding: kaggle/working/train/normal/220_32.jpg (deflated 4%) adding: kaggle/working/train/normal/220_78.jpg (deflated 6%) adding: kaggle/working/train/normal/220_42.jpg (deflated 4%) adding: kaggle/working/train/normal/220_92.jpg (deflated 4%) adding: kaggle/working/train/normal/220_35.jpg (deflated 4%) adding: kaggle/working/train/normal/220_90.jpg (deflated 4%) adding: kaggle/working/train/normal/220_8.jpg (deflated 3%) adding: kaggle/working/train/normal/220_57.jpg (deflated 4%) adding: kaggle/working/train/normal/220_67.jpg (deflated 4%) adding: kaggle/working/train/normal/220_50.jpg (deflated 4%) adding: kaggle/working/train/normal/220_11.jpg (deflated 4%) adding: kaggle/working/train/normal/220_39.jpg (deflated 3%) adding: kaggle/working/train/normal/220_10.jpg (deflated 3%) adding: kaggle/working/train/normal/220_44.jpg (deflated 3%) adding: kaggle/working/train/normal/220_73.jpg (deflated 4%) adding: kaggle/working/train/normal/220_84.jpg (deflated 5%) adding: kaggle/working/train/normal/220_70.jpg (deflated 5%) adding: kaggle/working/train/normal/220_30.jpg (deflated 4%) adding: kaggle/working/train/normal/220_17.jpg (deflated 3%) adding: kaggle/working/train/normal/220_3.jpg (deflated 3%) adding: kaggle/working/train/normal/220_47.jpg (deflated 4%) adding: kaggle/working/train/normal/220_21.jpg (deflated 3%) adding: kaggle/working/train/normal/220_19.jpg (deflated 3%) adding: kaggle/working/train/normal/220_79.jpg (deflated 5%) adding: kaggle/working/train/normal/220_87.jpg (deflated 4%) adding: kaggle/working/train/normal/220_66.jpg (deflated 4%) adding: kaggle/working/train/normal/220_40.jpg (deflated 3%) adding: kaggle/working/train/normal/220_94.jpg (deflated 5%) adding: kaggle/working/train/normal/220_20.jpg (deflated 3%) adding: kaggle/working/train/normal/220_82.jpg 
(deflated 4%) adding: kaggle/working/train/normal/220_18.jpg (deflated 3%) adding: kaggle/working/train/normal/220_60.jpg (deflated 4%) adding: kaggle/working/train/normal/220_28.jpg (deflated 3%) adding: kaggle/working/train/normal/220_51.jpg (deflated 4%) adding: kaggle/working/train/normal/220_15.jpg (deflated 3%) adding: kaggle/working/train/normal/220_38.jpg (deflated 4%) adding: kaggle/working/train/normal/220_56.jpg (deflated 4%) adding: kaggle/working/train/normal/220_72.jpg (deflated 4%) adding: kaggle/working/train/normal/220_33.jpg (deflated 3%) adding: kaggle/working/train/normal/220_81.jpg (deflated 4%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/ (stored 0%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/FILE0.jpg (deflated 6%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/Volume0_0.jpg (deflated 9%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_5.jpg (deflated 7%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_4.jpg (deflated 5%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_0 (2).jpg (deflated 5%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220 (2).jpg (deflated 3%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220.jpg (deflated 7%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_1.jpg (deflated 5%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_6.jpg (deflated 6%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_0.jpg (deflated 7%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_0 (2).jpg (deflated 4%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_4 (2).jpg (deflated 3%) adding: 
kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_7 (2).jpg (deflated 5%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_0.jpg (deflated 5%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_7 (2).jpg (deflated 4%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_7.jpg (deflated 6%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/4440_1.jpg (deflated 6%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_3 (2).jpg (deflated 4%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_1 (2).jpg (deflated 6%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/4440_2.jpg (deflated 4%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/4440_3.jpg (deflated 9%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_7.jpg (deflated 9%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_6 (2).jpg (deflated 3%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_6 (2).jpg (deflated 8%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_2.jpg (deflated 5%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_2 (2).jpg (deflated 4%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/4440_0.jpg (deflated 4%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_10.jpg (deflated 4%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/A0_5.jpg (deflated 6%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_3.jpg (deflated 7%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/A0_4.jpg (deflated 5%) adding: 
kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_12.jpg (deflated 4%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/A0_0.jpg (deflated 5%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_13.jpg (deflated 6%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/4440.jpg (deflated 3%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/FILE0_0.jpg (deflated 9%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_4 (2).jpg (deflated 4%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_8.jpg (deflated 7%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_11.jpg (deflated 3%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_2 (2).jpg (deflated 6%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_10.jpg (deflated 5%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/A0_2.jpg (deflated 6%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_8.jpg (deflated 5%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_11.jpg (deflated 5%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/Volume0.jpg (deflated 6%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_1 (2).jpg (deflated 3%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_3.jpg (deflated 9%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_11 (2).jpg (deflated 3%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_14.jpg (deflated 6%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_12.jpg (deflated 3%) adding: 
kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_12 (2).jpg (deflated 7%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/A0_6.jpg (deflated 6%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_9.jpg (deflated 4%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_5 (2).jpg (deflated 4%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/A0.jpg (deflated 4%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/4440_4.jpg (deflated 9%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/220_15.jpg (deflated 3%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10.jpg (deflated 4%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_1.jpg (deflated 6%) adding: kaggle/working/train/degenerative_joint_disease_of_the_thoracic_spine/10_5 (2).jpg (deflated 6%) adding: kaggle/working/train/scoliosis/ (stored 0%) adding: kaggle/working/train/scoliosis/220_5.jpg (deflated 3%) adding: kaggle/working/train/scoliosis/A0_16.jpg (deflated 5%) adding: kaggle/working/train/scoliosis/220 (2).jpg (deflated 3%) adding: kaggle/working/train/scoliosis/A0_29.jpg (deflated 6%) adding: kaggle/working/train/scoliosis/220.jpg (deflated 3%) adding: kaggle/working/train/scoliosis/A0_8.jpg (deflated 4%) adding: kaggle/working/train/scoliosis/220_0 (2).jpg (deflated 3%) adding: kaggle/working/train/scoliosis/10_18.jpg (deflated 5%) adding: kaggle/working/train/scoliosis/A0_22.jpg (deflated 4%) adding: kaggle/working/train/scoliosis/10_0.jpg (deflated 5%) adding: kaggle/working/train/scoliosis/A0_28.jpg (deflated 6%) adding: kaggle/working/train/scoliosis/10_7.jpg (deflated 5%) adding: kaggle/working/train/scoliosis/4440_1.jpg (deflated 3%) adding: kaggle/working/train/scoliosis/220_2.jpg (deflated 3%) adding: 
kaggle/working/train/scoliosis/A0_20.jpg (deflated 5%) adding: kaggle/working/train/scoliosis/10_17.jpg (deflated 3%) adding: kaggle/working/train/scoliosis/A0_11.jpg (deflated 3%) adding: kaggle/working/train/scoliosis/4440_2.jpg (deflated 6%) adding: kaggle/working/train/scoliosis/A0_21.jpg (deflated 4%) adding: kaggle/working/train/scoliosis/A0_25.jpg (deflated 6%) adding: kaggle/working/train/scoliosis/4440_3.jpg (deflated 4%) adding: kaggle/working/train/scoliosis/10_6.jpg (deflated 8%) adding: kaggle/working/train/scoliosis/220_7.jpg (deflated 3%) adding: kaggle/working/train/scoliosis/10_2.jpg (deflated 4%) adding: kaggle/working/train/scoliosis/4440_0.jpg (deflated 4%) adding: kaggle/working/train/scoliosis/4440_7.jpg (deflated 5%) adding: kaggle/working/train/scoliosis/A0_10.jpg (deflated 5%) adding: kaggle/working/train/scoliosis/10_4.jpg (deflated 3%) adding: kaggle/working/train/scoliosis/A0_17.jpg (deflated 3%) adding: kaggle/working/train/scoliosis/10_15.jpg (deflated 4%) adding: kaggle/working/train/scoliosis/A0_5.jpg (deflated 4%) adding: kaggle/working/train/scoliosis/10_3.jpg (deflated 5%) adding: kaggle/working/train/scoliosis/A0_23.jpg (deflated 6%) adding: kaggle/working/train/scoliosis/A0_4.jpg (deflated 3%) adding: kaggle/working/train/scoliosis/10_16.jpg (deflated 4%) adding: kaggle/working/train/scoliosis/A0_24.jpg (deflated 4%) adding: kaggle/working/train/scoliosis/A0_0.jpg (deflated 6%) adding: kaggle/working/train/scoliosis/A0_26.jpg (deflated 6%) adding: kaggle/working/train/scoliosis/10_13.jpg (deflated 5%) adding: kaggle/working/train/scoliosis/4440_5.jpg (deflated 5%) adding: kaggle/working/train/scoliosis/4440.jpg (deflated 3%) adding: kaggle/working/train/scoliosis/10 (2).jpg (deflated 4%) adding: kaggle/working/train/scoliosis/A0_15.jpg (deflated 5%) adding: kaggle/working/train/scoliosis/220_8.jpg (deflated 3%) adding: kaggle/working/train/scoliosis/A0_2.jpg (deflated 5%) adding: kaggle/working/train/scoliosis/10_5.jpg (deflated 
4%) adding: kaggle/working/train/scoliosis/10_8.jpg (deflated 3%) adding: kaggle/working/train/scoliosis/A0_19.jpg (deflated 4%) adding: kaggle/working/train/scoliosis/220_1 (2).jpg (deflated 3%) adding: kaggle/working/train/scoliosis/A0_12.jpg (deflated 5%) adding: kaggle/working/train/scoliosis/A0_18.jpg (deflated 5%) adding: kaggle/working/train/scoliosis/10_14.jpg (deflated 4%) adding: kaggle/working/train/scoliosis/A0_6.jpg (deflated 4%) adding: kaggle/working/train/scoliosis/A0_7.jpg (deflated 5%) adding: kaggle/working/train/scoliosis/A0.jpg (deflated 3%) adding: kaggle/working/train/scoliosis/4440_4.jpg (deflated 7%) adding: kaggle/working/train/scoliosis/10.jpg (deflated 7%) adding: kaggle/working/train/scoliosis/A0_14.jpg (deflated 5%) adding: kaggle/working/test/ (stored 0%) adding: kaggle/working/test/image/ (stored 0%) adding: kaggle/working/test/image/103.jpg (deflated 4%) adding: kaggle/working/test/image/096.jpg (deflated 7%) adding: kaggle/working/test/image/008.jpg (deflated 7%) adding: kaggle/working/test/image/095.jpg (deflated 7%) adding: kaggle/working/test/image/069.jpg (deflated 6%) adding: kaggle/working/test/image/055.jpg (deflated 4%) adding: kaggle/working/test/image/016.jpg (deflated 9%) adding: kaggle/working/test/image/067.jpg (deflated 5%) adding: kaggle/working/test/image/013.jpg (deflated 8%) adding: kaggle/working/test/image/099.jpg (deflated 6%) adding: kaggle/working/test/image/003.jpg (deflated 5%) adding: kaggle/working/test/image/019.jpg (deflated 4%) adding: kaggle/working/test/image/091.jpg (deflated 4%) adding: kaggle/working/test/image/005.jpg (deflated 4%) adding: kaggle/working/test/image/081.jpg (deflated 5%) adding: kaggle/working/test/image/011.jpg (deflated 3%) adding: kaggle/working/test/image/041.jpg (deflated 6%) adding: kaggle/working/test/image/084.jpg (deflated 9%) adding: kaggle/working/test/image/089.jpg (deflated 2%) adding: kaggle/working/test/image/039.jpg (deflated 4%) adding: 
kaggle/working/test/image/006.jpg (deflated 7%) adding: kaggle/working/test/image/083.jpg (deflated 4%) adding: kaggle/working/test/image/094.jpg (deflated 5%) adding: kaggle/working/test/image/048.jpg (deflated 3%) adding: kaggle/working/test/image/075.jpg (deflated 8%) adding: kaggle/working/test/image/034.jpg (deflated 4%) adding: kaggle/working/test/image/111.jpg (deflated 6%) adding: kaggle/working/test/image/051.jpg (deflated 3%) adding: kaggle/working/test/image/014.jpg (deflated 8%) adding: kaggle/working/test/image/052.jpg (deflated 4%) adding: kaggle/working/test/image/010.jpg (deflated 7%) adding: kaggle/working/test/image/037.jpg (deflated 6%) adding: kaggle/working/test/image/079.jpg (deflated 8%) adding: kaggle/working/test/image/049.jpg (deflated 6%) adding: kaggle/working/test/image/025.jpg (deflated 8%) adding: kaggle/working/test/image/085.jpg (deflated 4%) adding: kaggle/working/test/image/065.jpg (deflated 4%) adding: kaggle/working/test/image/018.jpg (deflated 4%) adding: kaggle/working/test/image/063.jpg (deflated 11%) adding: kaggle/working/test/image/001.jpg (deflated 8%) adding: kaggle/working/test/image/020.jpg (deflated 4%) adding: kaggle/working/test/image/088.jpg (deflated 10%) adding: kaggle/working/test/image/007.jpg (deflated 6%) adding: kaggle/working/test/image/100.jpg (deflated 6%) adding: kaggle/working/test/image/080.jpg (deflated 4%) adding: kaggle/working/test/image/078.jpg (deflated 7%) adding: kaggle/working/test/image/105.jpg (deflated 8%) adding: kaggle/working/test/image/074.jpg (deflated 4%) adding: kaggle/working/test/image/098.jpg (deflated 8%) adding: kaggle/working/test/image/031.jpg (deflated 13%) adding: kaggle/working/test/image/109.jpg (deflated 4%) adding: kaggle/working/test/image/104.jpg (deflated 7%) adding: kaggle/working/test/image/053.jpg (deflated 4%) adding: kaggle/working/test/image/023.jpg (deflated 6%) adding: kaggle/working/test/image/045.jpg (deflated 6%) adding: kaggle/working/test/image/068.jpg 
(deflated 4%) adding: kaggle/working/test/image/044.jpg (deflated 5%) adding: kaggle/working/test/image/073.jpg (deflated 3%) adding: kaggle/working/test/image/040.jpg (deflated 3%) adding: kaggle/working/test/image/021.jpg (deflated 6%) adding: kaggle/working/test/image/101.jpg (deflated 9%) adding: kaggle/working/test/image/092.jpg (deflated 7%) adding: kaggle/working/test/image/015.jpg (deflated 7%) adding: kaggle/working/test/image/004.jpg (deflated 4%) adding: kaggle/working/test/image/087.jpg (deflated 3%) adding: kaggle/working/test/image/032.jpg (deflated 3%) adding: kaggle/working/test/image/072.jpg (deflated 6%) adding: kaggle/working/test/image/077.jpg (deflated 3%) adding: kaggle/working/test/image/047.jpg (deflated 4%) adding: kaggle/working/test/image/057.jpg (deflated 6%) adding: kaggle/working/test/image/108.jpg (deflated 5%) adding: kaggle/working/test/image/110.jpg (deflated 7%) adding: kaggle/working/test/image/046.jpg (deflated 4%) adding: kaggle/working/test/image/058.jpg (deflated 7%) adding: kaggle/working/test/image/066.jpg (deflated 6%) adding: kaggle/working/test/image/024.jpg (deflated 8%) adding: kaggle/working/test/image/102.jpg (deflated 7%) adding: kaggle/working/test/image/012.jpg (deflated 8%) adding: kaggle/working/test/image/026.jpg (deflated 4%) adding: kaggle/working/test/image/028.jpg (deflated 4%) adding: kaggle/working/test/image/112.jpg (deflated 4%) adding: kaggle/working/test/image/002.jpg (deflated 5%) adding: kaggle/working/test/image/070.jpg (deflated 4%) adding: kaggle/working/test/image/061.jpg (deflated 6%) adding: kaggle/working/test/image/076.jpg (deflated 5%) adding: kaggle/working/test/image/009.jpg (deflated 6%) adding: kaggle/working/test/image/029.jpg (deflated 5%) adding: kaggle/working/test/image/022.jpg (deflated 3%) adding: kaggle/working/test/image/113.jpg (deflated 7%) adding: kaggle/working/test/image/086.jpg (deflated 3%) adding: kaggle/working/test/image/056.jpg (deflated 6%) adding: 
kaggle/working/test/image/042.jpg (deflated 5%) adding: kaggle/working/test/image/071.jpg (deflated 8%) adding: kaggle/working/test/image/027.jpg (deflated 3%) adding: kaggle/working/test/image/093.jpg (deflated 6%) adding: kaggle/working/test/image/060.jpg (deflated 3%) adding: kaggle/working/test/image/033.jpg (deflated 3%) adding: kaggle/working/test/image/107.jpg (deflated 7%) adding: kaggle/working/test/image/038.jpg (deflated 5%) adding: kaggle/working/test/image/043.jpg (deflated 3%) adding: kaggle/working/test/image/030.jpg (deflated 4%) adding: kaggle/working/test/image/097.jpg (deflated 6%) adding: kaggle/working/test/image/059.jpg (deflated 4%) adding: kaggle/working/test/image/050.jpg (deflated 7%) adding: kaggle/working/test/image/106.jpg (deflated 9%) adding: kaggle/working/test/image/082.jpg (deflated 3%) adding: kaggle/working/test/image/054.jpg (deflated 5%) adding: kaggle/working/test/image/017.jpg (deflated 8%) adding: kaggle/working/test/image/090.jpg (deflated 4%) adding: kaggle/working/test/image/035.jpg (deflated 9%) adding: kaggle/working/test/image/036.jpg (deflated 4%) adding: kaggle/working/test/image/062.jpg (deflated 3%) adding: kaggle/working/test/image/064.jpg (deflated 4%) adding: kaggle/working/coco_eval.py (deflated 76%) adding: kaggle/working/utils.py (deflated 70%) adding: kaggle/working/.virtual_documents/ (stored 0%) adding: kaggle/working/val.json (deflated 80%) adding: kaggle/working/submission.csv (deflated 62%)
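The listing above is the standard output of a recursive `zip -r` run over `kaggle/working`, packaging the converted train/test folders plus `val.json` and `submission.csv` for download. A minimal Python equivalent is sketched below using `shutil.make_archive`; the helper name `pack_outputs` and the archive name `hwk05_output` are assumptions for illustration, not part of the notebook:

```python
import shutil

def pack_outputs(archive_name: str, root_dir: str, base_dir: str) -> str:
    """Create <archive_name>.zip containing base_dir, stored relative to root_dir.

    Equivalent to: (cd root_dir && zip -r archive_name.zip base_dir)
    Returns the path of the created archive.
    """
    return shutil.make_archive(archive_name, "zip", root_dir=root_dir, base_dir=base_dir)

# On Kaggle this would be something like (paths assumed):
# pack_outputs("hwk05_output", root_dir="/", base_dir="kaggle/working")
```

Entries inside the archive are stored relative to `root_dir`, which is why the listing above shows paths beginning with `kaggle/working/...` rather than absolute paths.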